Autism spectrum disorder (ASD) is a severe neurodevelopmental disorder characterized by deficits in social interaction and communication, repetitive behaviors, and narrow interests. Impaired emotional communication is considered a core deficit of ASD. However, individuals with autism vary greatly, and research findings on this question remain inconclusive. Based on the similarity between autistic traits and ASD, many researchers in recent years have studied individuals with autistic traits to explore ASD-related questions. Given the remarkable performance of autistic people in musical pitch processing, and the fact that both music and faces are important media of human emotional communication, this study used electrophysiological techniques to examine music and facial emotion recognition in an autistic-trait population under multi-channel and single-channel emotion priming paradigms.

The study consists of four EEG experiments. Experiment 1 used a multi-channel emotion priming paradigm to investigate music emotion recognition in the autistic-trait population. Chords served as priming stimuli and emotional faces as target stimuli, with chord consonance and facial emotion manipulated independently. Behavioral results showed that response times in the emotionally incongruent condition were longer than in the multi-channel congruent condition. EEG results showed that, unlike the low-autistic-trait group, the high-autistic-trait group exhibited no N400 effect in chord emotion recognition.

Experiment 2 used the same priming paradigm to investigate music emotion recognition in the autistic-trait population. The manipulations were the same as in Experiment 1, but the priming stimuli were emotional faces and the target stimuli were chords. Behavioral results showed that when the audiovisual emotions across channels were incongruent, participants judged the music emotion more quickly and more accurately. EEG results showed that, unlike the low-autistic-trait group, the high-autistic-trait group exhibited no N400 effect in facial emotion recognition.

Both Experiment 1 and Experiment 2 used a multi-channel emotion priming paradigm, and it has been suggested that people with ASD may have difficulty integrating multi-channel information. To exclude the influence of multi-channel information integration, Experiments 3 and 4 examined music and facial emotion recognition in the autistic-trait population using a single-channel emotion priming paradigm. In Experiment 3, chord consonance was manipulated, and both the priming and target stimuli were chords. Behavioral results showed that when the single-channel auditory emotions were incongruent, participants judged the music emotion more quickly and more accurately. EEG results showed that, unlike the low-autistic-trait group, the high-autistic-trait group elicited no P3 component in music emotion recognition.

Experiment 4 used the single-channel emotion priming paradigm to investigate facial emotion recognition in the autistic-trait population. Facial emotion was manipulated, and both the priming and target stimuli were emotional faces. Behavioral results showed that when the visual emotions were incongruent, participants judged the facial emotion more quickly and more accurately. EEG results showed that, unlike the low-autistic-trait group, the high-autistic-trait group elicited no N400 effect in emotion recognition of face pictures.

Taken together, the results show that, compared with the low-autistic-trait group, the high-autistic-trait group has difficulty recognizing facial and musical emotion under both multi-channel and single-channel emotion priming. This study is the first to systematically explore the neural basis of music and facial emotion recognition in individuals with autistic traits from an electrophysiological perspective. The findings provide empirical evidence for interventions targeting facial emotion recognition deficits in autistic individuals.