The preoperative examination of breast tumors relies mainly on medical imaging of the tumor. Ultrasound imaging has become the most commonly used diagnostic basis because it is inexpensive and involves no ionizing radiation. However, manual interpretation depends heavily on clinical experience, which leads to high false-positive rates, over-biopsy, and over-diagnosis. Artificial intelligence is therefore applied at the disease-screening stage to quantify and visualize diagnostic indicators and to provide objective support for clinical diagnosis. Based on the imaging characteristics of the different data, a dual-branch multi-task neural network is designed that fuses two ultrasound modalities of the same sample to achieve accurate, automatic identification of breast tumors.

First, this thesis uses brightness-mode (B-mode) ultrasound of breast samples together with the corresponding contrast-enhanced ultrasound (CEUS) to capture tumor characteristics at different levels. Because B-mode ultrasound shows small gray-scale variation and nodule-like regions overlap, a feature extractor that learns the key features insufficiently will produce classification errors. To strengthen the model's analysis of the important regions, this thesis proposes a guided attention mechanism that uses the segmented nodule mask (ROI-mask) as a guiding signal to correct shallow spatial features, so that the model focuses on the lesion region of interest and its surroundings in the breast ultrasound image.

Second, for noisy data in which benign and malignant nodule morphologies overlap, the corresponding contrast-enhanced ultrasound features are added to assist discrimination. Constrained training with multi-label classification yields semantic features representative of the enhancement findings. At the same time, a graph convolutional network is used to strengthen the associations between label nodes, and a new node-feature update rule is proposed according to the multi-label association characteristics of contrast enhancement, so that the model learns a more accurate mid-level semantic mapping of the medical findings. Finally, the features of the two modalities are fused to determine the classification result, providing richer and more reliable deep information for the classification task.

This thesis compiles a dual-mode breast ultrasound dataset of 1093 cases (Dual-mode Breast Ultrasound Dataset, DM-Breast; benign : malignant = 562 : 531). On this dataset the classification accuracy is 90%, the specificity 88%, the sensitivity 91%, and the AUC 0.92, an improvement over commonly used classification algorithms. Ablation experiments confirm the rationality of each task design and the effectiveness of the feature fusion. The main contributions of this thesis concern the learning of key image information, the knowledge-discovery ability of the model, and improved generalization. The experimental results show that the clinical diagnostic workflow provides useful guidance for the design of computer algorithms. The proposed breast tumor diagnosis method, which fuses contrast-enhanced ultrasound features, improves the accuracy and robustness of intelligent diagnosis and offers reference value for clinical diagnosis.
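The guided attention mechanism can be pictured with a minimal sketch. The module below is an illustration only, not the thesis implementation: it assumes a PyTorch backbone, and the mask-dilation size, gating layer, and residual blending are hypothetical choices that stand in for how an ROI-mask might re-weight shallow spatial features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GuidedAttention(nn.Module):
    """Sketch of mask-guided spatial attention: the segmented nodule mask
    (ROI-mask) re-weights shallow feature maps so the B-mode branch attends
    to the lesion and its surroundings. Hyper-parameters are illustrative."""

    def __init__(self, in_channels, dilate_kernel=7):
        super().__init__()
        # A learnable 1x1 conv lets the network decide how strongly the
        # mask prior should modulate each channel.
        self.gate = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.dilate_kernel = dilate_kernel

    def forward(self, feat, roi_mask):
        # feat:     (B, C, H, W) shallow feature map from the B-mode branch
        # roi_mask: (B, 1, h, w) binary nodule segmentation mask
        mask = F.interpolate(roi_mask, size=feat.shape[-2:], mode="nearest")
        # Dilate the mask with max-pooling so surrounding tissue is kept.
        pad = self.dilate_kernel // 2
        mask = F.max_pool2d(mask, self.dilate_kernel, stride=1, padding=pad)
        # Spatial attention with a residual path, so information outside
        # the ROI is attenuated rather than discarded.
        attn = torch.sigmoid(self.gate(feat)) * mask
        return feat * (1.0 + attn)
```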
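A small sketch of how a graph convolutional layer can relate the contrast-enhancement labels of the CEUS branch is given below. The co-occurrence adjacency matrix, the embedding sizes, and the mixing of neighbour messages with a self-loop term are assumptions made for illustration; the thesis defines its own node-feature update rule.

```python
import torch
import torch.nn as nn


class LabelGCN(nn.Module):
    """Sketch of a label-relation graph layer for the CEUS branch. Each node
    is one contrast-enhancement label; `adj` is a row-normalised label
    co-occurrence matrix assumed to be estimated from training annotations."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        self.self_loop = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, node_feat, adj):
        # node_feat: (num_labels, in_dim) label embeddings
        # adj:       (num_labels, num_labels) row-normalised co-occurrence
        neighbour = adj @ self.weight(node_feat)   # aggregate related labels
        return torch.relu(neighbour + self.self_loop(node_feat))


# Usage sketch: the graph output serves as per-label classifiers that are
# applied to the pooled CEUS image feature, giving one logit per label.
num_labels, feat_dim = 6, 512                     # illustrative sizes
gcn = LabelGCN(300, feat_dim)                     # 300-d label embeddings
label_emb = torch.randn(num_labels, 300)
adj = torch.softmax(torch.randn(num_labels, num_labels), dim=1)
classifiers = gcn(label_emb, adj)                 # (num_labels, feat_dim)
image_feat = torch.randn(1, feat_dim)             # pooled CEUS feature
logits = image_feat @ classifiers.t()             # multi-label logits
```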
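The final combination of the two modality features into a benign/malignant decision can be as simple as the late-fusion head sketched below; all dimensions and the layer layout are assumed for illustration.

```python
import torch
import torch.nn as nn


class DualBranchFusion(nn.Module):
    """Sketch of late fusion: the pooled B-mode feature and the CEUS
    semantic feature are concatenated and classified. Sizes are assumed."""

    def __init__(self, bmode_dim=512, ceus_dim=512, num_classes=2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(bmode_dim + ceus_dim, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, bmode_feat, ceus_feat):
        # Concatenate the two modality features and predict benign/malignant.
        fused = torch.cat([bmode_feat, ceus_feat], dim=1)
        return self.classifier(fused)
```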