Language is a rule-governed symbol system that humans use to communicate thoughts, feelings, and needs, and it is one of the key traits that distinguish humans from animals. However, the neural mechanism underlying the audio-visual integration of language remains unclear. Audio-visual language interaction is a complex cognitive process involving the coordinated activity of multiple brain regions, and remarkably, the brain processes linguistic information within milliseconds. Among existing brain-signal acquisition techniques, electroencephalography (EEG) offers millisecond-scale temporal resolution and provides a direct measure of the brain's neural electrical activity, making it a widely used tool in cognitive research. This study therefore applied complex network theory and brain network analysis techniques to uncover the neural mechanisms underlying audio-visual language interaction. We systematically investigated the multimodal processing of different linguistic hierarchical structures, the dynamic network mechanisms of audio-visual language integration, the network mechanisms corresponding to specific rhythms in different modalities, and the mechanism of audio-visual competition in language. The main work of this dissertation is as follows:

1. The audio-visual integration mechanism of different linguistic hierarchical structures in Chinese. Building on previous experimental paradigms, we designed a new audio-visual stimulation paradigm for Chinese hierarchical structures that avoids contamination from harmonics of the responses at different linguistic levels, and we successfully separated EEG signals corresponding to syllables, phrases, and sentences at specific tagging frequencies under different modalities. Audio-visual integration occurred at every linguistic level, but the brain regions involved in
integration differed across levels. In particular, the left prefrontal cortex was activated when processing sentence-level synchronous audio-visual information. Transcranial magnetic stimulation (TMS) was then used to modulate the left prefrontal area, further verifying its causal role in integrating audio-visual information at the sentence level.

2. Brain network patterns of different hierarchical structures of Chinese under audio-visual synchrony. Using large-scale EEG time-varying network analysis, I explored brain network structures at different stages of language processing. Under audio-visual synchrony, the brain exhibited different connection patterns for different linguistic levels: when participants received single-word information, the brain displayed a predominantly bottom-up flow from the left posterior temporal lobe to the left prefrontal cortex; when they received sentence information, it showed a predominantly top-down mechanism, with much stronger information flow from the left prefrontal cortex to the parieto-occipital cortex. In addition, I extracted Spatial Pattern of Network (SPN) features from the EEG data with a supervised learning method and, combined with a voting algorithm, used them to distinguish the brain states associated with different linguistic levels.

3. Brain network connectivity patterns of language-related rhythms. Based on EEG data collected while participants performed language tasks, I quantified brain network connectivity patterns at different rhythms using nonparametric Granger causality (nGC). To gain more detailed insight into the spatial and spectral structure of this brain-wide network, I applied nonnegative matrix factorization (NMF) to the group-level connectivity data. Different rhythms were found to correspond to different network patterns
related to language processing: low-frequency responses correspond to the aggregation of information flow in the primary sensory cortices of the different modalities; information flow dominated by high-frequency responses showed a bottom-up flow from the left posterior temporal lobe to the left prefrontal cortex; and information flow dominated jointly by low- and high-frequency rhythms presented a top-down pattern from the left prefrontal cortex to the right occipital cortex.

4. The neural mechanism of Chinese audio-visual competition. Based on new Chinese character-phonetic materials, a new experimental paradigm was proposed to study the mechanism of Chinese audio-visual competition. When visual text and auditory speech do not match, participants may selectively attend to one of the modalities. Based on accuracy, event-related potentials (ERPs), and the reconfigured network patterns under different conditions during the language tasks, the abnormal-auditory condition yielded higher accuracy, a larger N400 amplitude, and more robust activation of the occipito-parietal region than the abnormal-visual condition. These results consistently support the superiority of the auditory modality over the visual modality in processing audio-visual linguistic information.

In conclusion, the value of this dissertation lies in its investigation of the mechanisms of Chinese audio-visual interaction. Combining frequency tagging with brain network analysis, I explored the information processing mechanisms of different hierarchical structures in language, and the network mechanisms corresponding to specific rhythms in different modalities were revealed. The dynamic network mechanism of audio-visual language integration and the network differences between different linguistic hierarchical
structures were established, and an understanding of the audio-visual competition mechanism in language was achieved.
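As a rough illustration of the frequency-tagging analysis described in part 1, the sketch below builds a synthetic one-channel "EEG" signal containing responses at three tagging rates and reads their power off the spectrum. The specific rates (4 Hz syllable, 2 Hz phrase, 1 Hz sentence), the amplitudes, and the signal itself are illustrative assumptions, not the dissertation's actual stimulus parameters or data.

```python
# Minimal sketch of frequency tagging: responses tracking each linguistic
# level appear as discrete peaks at their tagging frequencies.
# Assumed rates (not from the dissertation): syllable 4 Hz, phrase 2 Hz,
# sentence 1 Hz; one synthetic channel stands in for real EEG.
import numpy as np

fs = 250                      # sampling rate (Hz)
t = np.arange(0, 40, 1 / fs)  # 40 s of data -> 0.025 Hz frequency resolution

rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * 4 * t)    # syllable-rate response
       + 0.6 * np.sin(2 * np.pi * 2 * t)  # phrase-rate response
       + 0.4 * np.sin(2 * np.pi * 1 * t)  # sentence-rate response
       + 0.5 * rng.standard_normal(t.size))

# Power spectrum of the recording.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(eeg)) ** 2 / t.size

def peak_power(f0):
    """Power at the spectral bin closest to the tagging frequency f0."""
    return power[np.argmin(np.abs(freqs - f0))]

for label, f0 in [("syllable", 4), ("phrase", 2), ("sentence", 1)]:
    print(f"{label}-rate ({f0} Hz) power: {peak_power(f0):.1f}")
```

Because the three rates are not harmonics of one another here, each peak can be attributed to one linguistic level; designing the paradigm so that harmonics of one level do not land on another level's tag is exactly the contamination issue the dissertation's paradigm addresses.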
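As a rough illustration of the NMF step in part 3, the sketch below factorizes a stack of synthetic nonnegative connectivity matrices (one row per frequency bin, one column per channel pair) into a few components, each pairing a spectral profile with a spatial network pattern. The matrix sizes, component count, and synthetic data are assumptions for illustration, not values from the dissertation.

```python
# Sketch of applying nonnegative matrix factorization (NMF) to
# group-level connectivity data, as in part 3. All sizes and data
# here are illustrative assumptions.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_channels, n_freqs = 32, 20
n_pairs = n_channels * n_channels  # flattened (source, target) pairs

# Synthetic connectivity built from two latent network patterns
# with frequency-dependent weights, plus a little noise.
patterns = rng.random((2, n_pairs))
weights = rng.random((n_freqs, 2))
connectivity = weights @ patterns + 0.01 * rng.random((n_freqs, n_pairs))

# Factorize: W holds each component's spectral profile,
# H its spatial (channel-pair) pattern.
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(connectivity)   # shape (n_freqs, 2)
H = model.components_                   # shape (2, n_pairs)

rel_err = (np.linalg.norm(connectivity - W @ H)
           / np.linalg.norm(connectivity))
print("relative reconstruction error:", rel_err)
```

Inspecting each row of H as a reshaped channel-by-channel matrix would then show which connections a component emphasizes, while the corresponding column of W shows the rhythms at which that network is expressed.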