As a common biometric signal, heart sound signals play an increasingly important role in the field of auscultation-assisted diagnosis. Currently, heart sound auscultation relies heavily on physician experience and the clinical environment, leading to a high probability of misdiagnosis. Therefore, analyzing the physiological signals of the heart through classification and recognition techniques has profound theoretical significance and broad application prospects. As artificial intelligence develops, deep learning (DL) based feature processing, pattern recognition, and auscultation-assisted diagnosis using heart sound signals have significant research value. This article analyzes the feature information of heart sounds in detail, focuses on several feature extraction techniques, and combines them with deep learning for classification and recognition. The main contributions of this study are summarized as follows:

(1) This study analyzes the time-frequency characteristics of heart sounds and proposes a second-order Butterworth filter to remove high- and low-frequency noise in the preprocessing stage, retaining as much feature information as possible while reducing noise. The study further applies preprocessing techniques such as equidistant segmentation, downsampling, and normalization to obtain heart sound signals suitable for subsequent experiments, reducing feature dimensionality and expanding the experimental dataset.

(2) This study compares a common time-frequency analysis method, the wavelet transform, with a power-spectrum method based on the Fast Fourier Transform (FFT). Combining the advantages of spectral analysis and the wavelet transform, it proposes a feature extraction method based on Wavelet Bispectral Analysis (WBA), whose extracted features are input into an improved Convolutional Neural Network (CNN). This method uses the complex Gaussian wavelet (CGT) as the wavelet basis function. Compared with traditional low-order spectral analysis, the wavelet bispectrum better captures time-frequency characteristics, distinguishes nonstationary signals more effectively, and strongly suppresses Gaussian noise, which benefits feature extraction from noisy signals. Experiments show that heart sound classification based on WBA and the improved CNN model improves computation speed while maintaining the precision and accuracy of the classification results.

(3) Given the complexity of manual feature extraction and the autonomy of deep learning, this study further proposes an end-to-end heart sound classification method building on the above. This method trains and classifies directly on raw heart sounds, fully utilizes the powerful feature extraction ability of the CNN model, and introduces the multi-head attention mechanism (MHA) from the family of attention mechanisms (AM) to further strengthen feature weights. The study also compensates for the weakness of MHA in handling temporal information by incorporating Long Short-Term Memory (LSTM). Experimental results show that the MHA-LSTM model achieves excellent recognition performance on both binary and multi-class datasets, with high accuracy and robustness.
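The preprocessing pipeline described in contribution (1) can be sketched as follows. The band edges, sampling rates, and segment length below are illustrative assumptions, not the thesis's exact parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, decimate

def preprocess_heart_sound(x, fs=2000, band=(25.0, 400.0), target_fs=1000, seg_len=2.0):
    """Band-pass filter, downsample, normalize, and segment a heart sound signal.

    fs, band, target_fs, and seg_len are assumed values for illustration.
    """
    # Second-order Butterworth band-pass removes low- and high-frequency noise
    # (second-order sections for numerical stability).
    sos = butter(2, band, btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, x)  # zero-phase filtering preserves S1/S2 timing

    # Downsampling reduces feature dimensionality.
    x = decimate(x, fs // target_fs)

    # Amplitude normalization to [-1, 1].
    x = x / (np.max(np.abs(x)) + 1e-12)

    # Equidistant segmentation into fixed-length windows expands the dataset.
    n = int(seg_len * target_fs)
    segments = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
    return np.stack(segments) if segments else np.empty((0, n))
```

Zero-phase filtering (`sosfiltfilt`) is chosen here so the filter does not shift the positions of the heart sound components within each segment.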
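The wavelet bispectrum of contribution (2) can be illustrated with a minimal numpy sketch: a continuous wavelet transform with a first-order complex Gaussian wavelet, followed by the triple product B(f1, f2) = Σ_t W(f1, t) W(f2, t) W*(f1 + f2, t). The wavelet normalization, frequency grid, and support width are simplifying assumptions, not the thesis's implementation.

```python
import numpy as np

def cgau1(t):
    """First-order complex Gaussian wavelet, d/dt[exp(-jt - t^2)] (unnormalized sketch)."""
    return (-1j - 2.0 * t) * np.exp(-1j * t - t ** 2)

def wavelet_bispectrum(x, fs, freqs):
    """Estimate B(f1, f2) = sum_t W(f1, t) W(f2, t) conj(W(f1 + f2, t)).

    `freqs` must be a uniform grid starting at its own spacing (f0, 2*f0, ...),
    so that f_i + f_j lands back on the grid.
    """
    fc = 1.0 / (2.0 * np.pi)  # rough center frequency of cgau1 (assumption)
    W = np.empty((len(freqs), len(x)), dtype=complex)
    for k, f in enumerate(freqs):
        a = fc * fs / f                       # scale targeting frequency f
        t = np.arange(-4 * a, 4 * a + 1) / a  # truncated wavelet support
        psi = cgau1(t) / np.sqrt(a)
        # CWT as correlation with the scaled wavelet.
        W[k] = np.convolve(x, np.conj(psi[::-1]), mode="same")
    K = len(freqs)
    B = np.zeros((K, K), dtype=complex)
    for i in range(K):
        for j in range(K):
            if i + j + 1 < K:                 # grid index of f_i + f_j
                B[i, j] = np.sum(W[i] * W[j] * np.conj(W[i + j + 1]))
    return B
```

Because the triple product averages out signals with random phase relations, the bispectrum suppresses additive Gaussian noise while remaining sensitive to phase-coupled components, which is the property the abstract attributes to WBA.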
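The end-to-end MHA-LSTM architecture of contribution (3) could be organized as below: a CNN front end on the raw waveform, multi-head self-attention to re-weight features, and an LSTM for the temporal modelling MHA alone lacks. Layer sizes, kernel widths, and the exact ordering are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class MHALSTM(nn.Module):
    """Sketch of an end-to-end heart sound classifier: CNN -> MHA -> LSTM -> FC.

    All hyperparameters here are illustrative assumptions.
    """
    def __init__(self, n_classes=2, d_model=64, n_heads=4):
        super().__init__()
        # CNN front end extracts local features directly from raw heart sounds.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=15, stride=4, padding=7),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=15, stride=4, padding=7),
            nn.ReLU(),
        )
        # Multi-head self-attention strengthens informative feature weights.
        self.mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # LSTM compensates for MHA's weakness in modelling temporal order.
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.fc = nn.Linear(d_model, n_classes)

    def forward(self, x):                    # x: (batch, samples) raw waveform
        h = self.cnn(x.unsqueeze(1))         # (batch, d_model, time)
        h = h.transpose(1, 2)                # (batch, time, d_model)
        h, _ = self.mha(h, h, h)             # self-attention over time steps
        h, _ = self.lstm(h)
        return self.fc(h[:, -1])             # logits from the last time step
```

Setting `n_classes` to 2 or more covers the binary and multi-class settings the abstract reports; the same backbone serves both tasks.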