As the leading cancer threat to women, breast cancer poses a serious risk to women's lives and health, and modern medicine has yet to offer a clear explanation of its causes. Early screening and treatment therefore remain among the most effective means of preventing and treating breast cancer. In clinical practice, mammography is one of the most commonly used tools for diagnosing breast diseases. However, because manual reading is time-consuming and subject to inter-observer variability, Computer-aided Diagnosis (CAD) systems have emerged. With the assistance of effective and accurate CAD systems, radiologists can significantly improve both the efficiency and the accuracy of diagnosis. Based on deep learning techniques, this thesis conducts in-depth research on several key steps in CAD systems and designs a series of novel mammographic recognition algorithms. The specific contents are as follows:

(1) Breast mass classification: Automated classification of benign and malignant breast masses is a critical and challenging task. In recent years, many methods based on Convolutional Neural Networks (CNN) have been proposed to solve this problem. However, most CNN-based methods ignore effective global context information and do not further analyze the reliability and interpretability of the CNN model, which does not meet the needs of clinical diagnosis. To address these problems, this study first proposes a Multi-level Global-guided Branch-attention Network (MGBN), which fully leverages multi-level global context information to refine feature representations. Specifically, the MGBN consists of a stem module and a branch module. The former extracts local information through the standard local convolution operations of ResNet-50, while the latter extracts global context information via global pooling and Multi-layer Perceptron (MLP) operations, thereby establishing relationships across different feature levels. The final prediction is computed jointly from the local and global information. The coarse localization map of the model is then visualized using Gradient-weighted Class Activation Mapping (Grad-CAM), and the reliability and interpretability of the proposed classification network are discussed, which is significant for clinical diagnosis. Finally, the proposed MGBN is thoroughly validated on two public breast mass classification datasets, DDSM and INbreast, achieving AUCs of 0.8375 and 0.9311, respectively, which constitutes state-of-the-art (SOTA) performance.

(2) Breast mass segmentation: Existing breast mass segmentation algorithms mainly operate on mass-centered patches, which is time-consuming and unstable in clinical diagnosis. Consequently, this thesis proposes a novel Dual Contextual Affinity Network (DCANet) for mass segmentation in full-field mammograms. Built on an encoder-decoder structure, two lightweight yet effective contextual affinity modules are proposed: the Global-guided Affinity Module (GAM) and the Local-guided Affinity Module (LAM). The former aggregates the features of all positions and captures long-range contextual dependencies, aiming to enhance the feature representations of homogeneous regions. The latter emphasizes the semantic information around each position and exploits contextual affinity within a local field of view, aiming to improve the discrimination among heterogeneous regions. The proposed DCANet is extensively evaluated on two public breast databases, DDSM and INbreast, achieving Dice Similarity Coefficients (DSC) of 85.95% and 84.65%, respectively; both its segmentation performance and its computational efficiency outperform current SOTA methods. Based on extensive qualitative and quantitative analyses, the proposed fully automated approach is sufficiently robust to provide fast and accurate diagnoses for clinical breast mass segmentation.

(3) Unsupervised image-to-image translation of mammograms: Domain shift refers to the inconsistency of feature distributions between the training set and the test set, which may prevent a deep learning model that performs well on the training set from generalizing to the clinical test set. To alleviate this problem, a novel unsupervised image-to-image translation model, called CCR-GAN, is proposed based on disentangled representation learning and Generative Adversarial Networks (GAN). Specifically, the input image is first decomposed into a content representation and an attribute representation. The disentangled content and attribute information are then cross-combined to perform image translation. Finally, the translated image undergoes a second disentanglement and is reconstructed back to the original input image. To strengthen the semantic relationship between the two disentangled representations, a Content-consistent Regularization Module (CCRM) is proposed to adaptively enforce the consistency of the content information obtained from the two disentanglement steps, thereby improving the cycle consistency of the entire model. Experimental results show that the proposed unsupervised image-to-image translation algorithm achieves SOTA performance on multiple datasets.
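The branch module in (1) is described as deriving global context via global pooling and MLP operations and using it to refine local features. A minimal NumPy sketch of that general idea, in the style of a squeeze-and-excitation gate, is shown below; the function name and weight shapes are hypothetical, and the actual MGBN operates across multiple ResNet-50 feature levels rather than a single map.

```python
import numpy as np

def global_context_branch(feat, w1, b1, w2, b2):
    """Hypothetical sketch: re-weight a (C, H, W) feature map with
    global context from pooling + a two-layer MLP."""
    # Global average pooling collapses the spatial dimensions -> (C,)
    g = feat.mean(axis=(1, 2))
    # Two-layer MLP produces per-channel gates in (0, 1)
    h = np.maximum(0.0, w1 @ g + b1)              # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # sigmoid
    # Broadcast the global gates back over the spatial map
    return feat * gate[:, None, None]
```

With zero MLP weights every gate is sigmoid(0) = 0.5, so the output is simply the input halved, which makes the broadcasting easy to check.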
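The GAM in (2) aggregates features from all positions through pairwise affinities to capture long-range dependencies, which resembles non-local self-attention. The following NumPy sketch shows only that aggregation step, omitting the learned projections a real module would include:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_affinity(feat):
    """Hypothetical sketch: non-local-style aggregation over all
    spatial positions of a (C, H, W) feature map."""
    C, H, W = feat.shape
    x = feat.reshape(C, -1).T       # (N, C), one row per position
    # Pairwise affinity between every pair of positions; each row
    # is a probability distribution over all N positions
    aff = softmax(x @ x.T)          # (N, N)
    # Every position aggregates features from all positions,
    # weighted by affinity -> long-range context
    out = aff @ x                   # (N, C)
    return out.T.reshape(C, H, W)
```

For a spatially uniform input the affinity rows are uniform and the aggregation returns the input unchanged, which is a convenient sanity check.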
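The CCR-GAN data flow in (3), namely disentangle, cross-combine, disentangle again, and reconstruct, can be sketched with toy linear maps standing in for the encoders and generator. All names and dimensions here are illustrative; the real model uses CNN encoders and is trained with adversarial and reconstruction losses.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IMG, D_C, D_A = 8, 4, 2                     # toy dimensions (hypothetical)

Ec = rng.normal(size=(D_C, D_IMG)) / 8        # content encoder stand-in
Ea = rng.normal(size=(D_A, D_IMG)) / 8        # attribute encoder stand-in
G = rng.normal(size=(D_IMG, D_C + D_A))       # generator stand-in

def generate(content, attribute):
    # Combine a content code and an attribute code into an image
    return G @ np.concatenate([content, attribute])

x_src = rng.normal(size=D_IMG)                # source-domain image
x_tgt = rng.normal(size=D_IMG)                # target-domain image

# 1) First disentanglement of both inputs
c_src, a_src = Ec @ x_src, Ea @ x_src
c_tgt, a_tgt = Ec @ x_tgt, Ea @ x_tgt

# 2) Cross-combine: source content + target attribute = translated image
x_trans = generate(c_src, a_tgt)

# 3) Second disentanglement of the translated image
c_trans = Ec @ x_trans

# 4) Cycle back toward the original input
x_cycle = generate(c_trans, a_src)

# CCRM-style content-consistency penalty: content recovered from the
# translated image should match the original source content
ccr_loss = float(np.sum((c_trans - c_src) ** 2))
cycle_loss = float(np.sum((x_cycle - x_src) ** 2))
```

The two scalar penalties at the end correspond to the roles of the CCRM and the cycle-consistency objective; in training they would be minimized jointly with the GAN losses.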