The morbidity and mortality rates of cardiovascular diseases are high, posing a serious threat to human health. Heart sound signals are physiological signals generated by the mechanical activity of the heart, and changes in these signals reflect the state of the heart and the arterial vessels, so cardiovascular diseases can be identified by classifying heart sound signals. In heart sound classification, feature extraction is one of the key steps. Because a single feature cannot express the complete information of a heart sound signal, and simply concatenating multiple features does not yield good classification performance, this thesis proposes a heart sound classification method based on multi-feature fusion to improve classification accuracy and robustness.

First, three feature extraction models are adopted. The wavelet scattering transform combines the wavelet modulus with low-pass filter averaging to obtain translation invariance and deformation stability, so the extracted features are insensitive to translation and small deformations of the heart sound signal. Mel-frequency cepstral coefficients (MFCCs) extract the discriminative components of an audio signal; since heart sound signals and speech signals share similar characteristics, both being non-stationary and generated by vibration, MFCCs are also suitable for heart sound recognition. The Hilbert-Huang transform adaptively performs time-frequency decomposition according to the local time-varying characteristics of the signal and offers high time-frequency resolution, which makes it especially suitable for non-stationary signals such as heart sounds. These three models are therefore used to extract heart sound features.

Second, based on the features above, two multi-feature fusion methods are proposed. The first is a fusion method based on ReliefF: the ReliefF algorithm is used to select and fuse the multiple
features, and then a machine learning classifier is applied. Experimental results show that this method can effectively select among the multiple features and thereby improve the classification accuracy of heart sound signals. In addition, considering that deep learning performs well on large amounts of data, this thesis also proposes a multi-feature fusion method based on CNN-TCN-Attention. This method extracts features through parallel convolutional neural network (CNN) and temporal convolutional network (TCN) branches: the CNN adaptively learns features and performs feature extraction and dimensionality reduction, while the TCN captures long-range temporal dependencies and preserves the integrity of the original features. The introduction of an attention mechanism further allows features to be weighted adaptively, improving the flexibility and robustness of the model. Feeding the multiple features into the CNN-TCN-Attention structure fully exploits the complementarity between the features and improves the accuracy and robustness of heart sound classification. A comparison of the experimental results of the two fusion methods shows that the latter achieves superior classification performance.
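To make the first fusion method concrete, the ReliefF weighting step can be sketched as follows. This is a minimal numpy illustration for a two-class problem, not the thesis's implementation; the function name, its parameters, and the neighbor count are illustrative assumptions. ReliefF scores each feature by how well it separates a sample from its nearest misses (other class) relative to its nearest hits (same class):

```python
import numpy as np

def relieff(X, y, n_neighbors=5, rng=None):
    """Minimal ReliefF feature weighting (illustrative sketch).

    X : (n, d) feature matrix, y : (n,) integer class labels.
    Returns one weight per feature; larger means more relevant.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Scale each feature to [0, 1] so per-feature diffs are comparable.
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    Xs = (X - X.min(axis=0)) / span

    classes, counts = np.unique(y, return_counts=True)
    prior = dict(zip(classes, counts / n))
    w = np.zeros(d)

    for i in rng.permutation(n):
        dist = np.abs(Xs - Xs[i]).sum(axis=1)   # Manhattan distance
        dist[i] = np.inf                        # exclude the sample itself
        for c in classes:
            # n_neighbors nearest samples of class c.
            order = np.argsort(np.where(y == c, dist, np.inf))[:n_neighbors]
            diffs = np.abs(Xs[order] - Xs[i]).mean(axis=0)
            if c == y[i]:
                w -= diffs / n                  # nearest hits lower the weight
            else:
                w += (prior[c] / (1 - prior[y[i]])) * diffs / n  # misses raise it
    return w
```

Features would then be ranked by `w` and the top-scoring ones retained and concatenated before being passed to the classifier.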
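The core building block of the TCN branch is the causal dilated convolution: each output depends only on the current and past samples, and stacking layers with growing dilation enlarges the receptive field without losing sequence length. The following is a minimal numpy sketch of a single such convolution, an illustrative assumption rather than the thesis's network (which would stack these with residual connections and learned kernels):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal dilated convolution (illustrative sketch).

    y[t] = sum_j w[j] * x[t - j*dilation], with zero padding on the
    left so the output has the same length as the input and never
    looks at future samples.
    """
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
        for t in range(len(x))
    ])
```

With kernel size `k` and dilations 1, 2, 4, ..., the receptive field doubles with each layer, which is what lets the TCN capture the long-range temporal structure of a heart sound recording.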