Accurate judgment and recognition of emotions is an important part of face-to-face communication. Daily social interactions involve not only facial expressions but also emotional prosody, and when facial expressions and emotional prosody occur simultaneously, visual and auditory emotional information may be integrated.

This study used both pleasant and angry emotions. Monosyllabic interjections such as "yeah" and "heng" served as auditory stimuli, and static facial-expression pictures served as visual stimuli; audiovisual stimuli consisted of a visual and an auditory stimulus of consistent emotional valence presented simultaneously. This yielded three stimulus conditions: visual, auditory, and audiovisual. The study comprised a behavioral experiment and an ERP experiment, adopting a dual audiovisual fixation-point control to examine its effect on the integration of two-channel emotional information. A forced-choice emotion classification task required participants to classify the perceived emotion as positive or negative. The data were analyzed with two-factor analysis of variance, which revealed the following: (1) Reaction time in the audiovisual condition was significantly shorter than in the visual condition (p < 0.05) and the auditory condition (p < 0.01), showing a redundant signal effect. (2) The N100 amplitude in the audiovisual condition was significantly more negative than in the single-channel visual and auditory conditions and than the sum of the two single channels (p < 0.001); it was emotion-specific and exhibited a significant redundant signal effect and superadditivity. The N100 latency in the audiovisual condition was longer than in the visual condition and shorter than in the auditory condition (p < 0.001), while the difference between the visual and auditory single-channel latencies was not significant; the latency showed neither a redundant signal effect nor superadditivity. (3) The P200 amplitude in the audiovisual condition was larger than in the single-channel visual and auditory conditions, but the difference from the visual condition did not reach significance, so no redundant signal effect was exhibited; the audiovisual P200 amplitude was larger than the sum of the single-channel visual and auditory amplitudes (p < 0.001), showing significant superadditivity. The audiovisual P200 latency showed neither a redundant signal effect nor superadditivity, but the summed single-channel visual-plus-auditory P200 latency (212.75 ms) differed significantly from the P200 latency of the audiovisual emotion integration wave (AVI) (195.48 ms; p < 0.01). (4) The mean LPC amplitude for negative emotions in the audiovisual condition was significantly greater than for positive emotions (p < 0.01), showing emotion specificity. The audiovisual mean amplitude was smaller than that of the visual channel and greater than that of the auditory channel (p < 0.01), showing no redundant signal effect; however, the audiovisual mean amplitude was greater than the sum of the single-channel visual and auditory amplitudes (p < 0.001), showing significant superadditivity.

From these results, the following conclusions can be drawn: congruent audiovisual emotional information can be integrated, and this integration is a multi-stage, continuous process. Within the first 100 ms of integration, substantial attentional resources are already allocated to the two-channel emotional information, while single-channel processing and cross-channel interactive processing coexist. Around 200 ms, the cognitive-processing component of two-channel integration increases, and the speed advantage of audiovisual emotional integration is reflected in the P200 window. This study therefore suggests that the redundant signal effect exhibited in the behavioral integration of two-channel emotional information is very likely the result of the large early allocation of attentional resources.