There are many patients with language-related disorders in my country, and the loss of verbal communication ability prevents them from receiving effective treatment and integrating into society. EEG (electroencephalogram)-based speech synthesis technology aims to bypass the physiological vocal organs, such as the larynx and vocal cords, and to decode and synthesize speech directly from EEG signals, building a bridge to normal communication for people with language impairments. The rapid development of deep learning has brought new opportunities for EEG-based speech synthesis. This paper studies artifact processing of raw EEG signals, their classification and interpretation, and deep-learning-based speech synthesis. The specific work is as follows.

(1) EMG (electromyography) and EOG (electro-oculogram) artifacts in raw EEG signals are difficult to remove effectively, which leaves the signals with a low signal-to-noise ratio. To address this, a residual-attention parallel convolutional neural network for removing artifacts from raw EEG signals was studied and implemented; it effectively removes both EMG and EOG artifacts. The model improves on a traditional CNN (convolutional neural network) by adding a residual parallel mechanism that widens the network and enables multi-scale extraction and fusion of data features. A channel attention mechanism was also introduced: attention learning on the fused features weakens the influence of secondary features and strengthens the network's feature-fusion ability. The artifact-removal model was built by learning the multi-scale fused features of EMG and EOG artifacts, achieving effective artifact removal. Finally, the effectiveness of the method was verified by comparison with different 
deep learning networks, which provided high-quality data for subsequent EEG signal classification.

(2) To address insufficient deep feature extraction and low classification accuracy when classifying EEG signals and mapping them to synthesized speech, a densely connected CNN classification model for EEG signals was studied and implemented. Densely connected modules, constructed and chained in series, reuse EEG feature maps, which strengthens feature extraction and enables accurate classification of speech-imagery EEG signals. Finally, tests on a public dataset and a self-built dataset, together with comparisons against other deep-learning EEG classification methods, verified that the densely connected CNN classification model performs better.

(3) To realize the full pipeline of EEG signal acquisition, processing, interpretation and classification, and final output of the classified speech, an EEG acquisition module based on active electrodes and high-resolution AD converters was adopted to obtain high-quality EEG signals. On this basis, the software workflow for synthesizing speech from EEG signals was first designed; the software's functional requirements and interface layout were then analyzed, and the interface layout was completed. The software displays the raw EEG signals, the cleaned EEG signals after processing, the classification and recognition results, and the recognition accuracy, and it synthesizes speech in real time from the classification result.
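The abstract does not give implementation details for the artifact-removal network in point (1). As an illustration only, the following is a minimal NumPy sketch of the three mechanisms it names: parallel convolution branches at several kernel sizes (multi-scale extraction), squeeze-and-excitation-style channel attention over the fused features, and a residual connection. All function names, kernel sizes, and the random weights are assumptions for illustration, not the thesis's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_conv(x, ksize):
    """One parallel branch: depthwise 'same' 1-D convolution with a
    random kernel per channel. x has shape (channels, time)."""
    c, t = x.shape
    pad = ksize // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    w = rng.standard_normal((c, ksize)) * 0.1
    out = np.zeros((c, t))
    for k in range(ksize):
        out += w[:, k:k + 1] * xp[:, k:k + t]
    return out

def channel_attention(feats, reduction=2):
    """Squeeze-and-excitation-style reweighting: global average pool
    over time, a bottleneck MLP, then per-channel sigmoid gates."""
    c = feats.shape[0]
    s = feats.mean(axis=1)                         # squeeze: (C,)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    a = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(0, w1 @ s))))  # gates in (0, 1)
    return feats * a[:, None]

def residual_attention_parallel(x, ksizes=(3, 5, 7)):
    """Parallel multi-scale branches -> fuse -> channel attention ->
    residual add, mirroring the structure described in point (1)."""
    fused = sum(branch_conv(x, k) for k in ksizes) / len(ksizes)
    return x + channel_attention(fused)
```

The residual path preserves the raw signal while the attended multi-scale branch learns a correction, which is one common way such a block keeps the output the same shape as the multichannel EEG input.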
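Point (2)'s key idea, dense connectivity, means each layer receives the concatenation of the block input and all earlier layers' outputs, so feature maps are reused rather than recomputed. Here is a minimal NumPy sketch of that connectivity pattern under assumed settings (random weights, three layers, growth rate 4); it illustrates the channel bookkeeping only, not the thesis's trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_layer(x, out_ch, ksize=3):
    """Toy 1-D convolution ('same' padding) with random weights followed
    by ReLU, mapping (C_in, T) -> (out_ch, T)."""
    c_in, t = x.shape
    w = rng.standard_normal((out_ch, c_in, ksize)) * 0.1
    xp = np.pad(x, ((0, 0), (ksize // 2, ksize // 2)))
    out = np.zeros((out_ch, t))
    for o in range(out_ch):
        for c in range(c_in):
            for k in range(ksize):
                out[o] += w[o, c, k] * xp[c, k:k + t]
    return np.maximum(0, out)  # ReLU

def dense_block(x, n_layers=3, growth=4):
    """Densely connected block: every layer sees the concatenation of the
    input and all previous layers' feature maps, so channels grow by
    `growth` per layer (feature-map reuse)."""
    feats = [x]
    for _ in range(n_layers):
        y = conv_layer(np.concatenate(feats, axis=0), growth)
        feats.append(y)
    return np.concatenate(feats, axis=0)
```

With an 8-channel input, three layers, and growth rate 4, the block emits 8 + 3 × 4 = 20 channels; a real classifier would stack such blocks with transition layers and end in a softmax over the speech-imagery classes.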