With the development of electronic imaging technology and the popularization of medical imaging equipment, chest X-ray imaging has become one of the most frequently used screening methods and diagnostic bases for detecting thoracic and pulmonary diseases. Owing to the diversity of pathologic findings and the complexity of imaging features, the traditional diagnostic mode often requires lengthy manual annotation and comprehensive analysis of chest X-ray images to capture anatomical information. Meanwhile, its diagnostic accuracy relies heavily on the clinical experience and subjective criteria of the radiologist. In recent years, advances in artificial intelligence and the availability of clinical big data have made it possible to automatically mine and discover meaningful information and knowledge from massive data. Benefiting from the powerful feature extraction and representation capabilities of neural networks, an increasing number of deep learning-based computer-aided diagnosis approaches have been applied to medical image analysis. Although substantial progress has been made in the intelligent analysis and application of chest X-ray images, the following challenges remain: (i) poor flexibility and generalization capability in multi-disease assisted diagnosis; (ii) weak robustness to noise interference within complex backgrounds; (iii) inadequate performance in the inference procedure for label co-occurrence learning; (iv) missing structured semantic association learning among different images. To address these issues, this dissertation seeks a deeper understanding of the collaborative relationships between intrinsic structures and semantic labels in chest X-ray images, and proposes a series of efficient deep collaborative learning algorithms that improve the generalization and robustness of diagnostic models and enhance the interpretability and logical consistency of diagnostic
decisions, which can effectively assist clinicians in making more accurate diagnoses. The major research innovations of this dissertation are as follows:

(1) To overcome the poor flexibility and generalization capability of multi-disease assisted diagnosis, we propose a novel feature complementary learning method based on the collaboration of dual-stream asymmetric networks. To establish cooperative complementary learning of two-stream asymmetric features, the method builds a wider and more efficient network architecture based on their structural cooperativity, fully exploring and leveraging the consistency and differentiation between different image representations. Moreover, it introduces a well-designed information fusion strategy that combines feature-level and decision-level fusion schemes to integrate and extend the feature representations obtained from the different subnetworks. In this way, the model achieves high-level organization and integration of disease information from different data representations. On this basis, an efficient two-stream collaborative training strategy is designed to further optimize the expressive ability of the obtained asymmetric features. Experimental results on the benchmark dataset demonstrate that the proposed method adapts better to complex morphologic signatures, thus increasing the generalization capability of the model.

(2) To address the weak robustness to noise interference within complex backgrounds, we propose a novel perception guidance learning method based on the collaboration of global-local visual areas. The method first leverages lung segmentation and weakly supervised localization to direct the network's attention toward local discriminative regions, providing prior guidance for learning discriminative features in successive stages. On the basis of the constraint of the local salient region, the method focuses on
combining local cues and global information to perform cooperative guidance learning based on the collaborative relationships of intrinsic structures, yielding a more refined diagnosis. By exploring and modeling their context-dependent information, the activating effect of key features in the joint representations is increased, while the adverse effect of noise is weakened. In this way, the method achieves an efficient, high-quality integration of local fine-grained features and global discriminative information, ultimately improving the algorithm's robustness to noise. Comparative experiments on large-scale datasets verify that the proposed method achieves better performance for multi-label disease classification.

(3) To deal with the inadequate performance of the inference procedure for label co-occurrence learning, we propose a novel label inference learning method based on the collaboration of multi-disease symbiotic relationships. The method first constructs a knowledge graph of multi-label disease relationships from the statistics of the corresponding multi-label chest X-ray image dataset. To conduct cooperative inference learning of multi-label symbiotic relationships, it introduces a Graph Convolutional Network (GCN) to effectively model the co-occurrence relationships and high-order associations between different abnormalities according to the collaborative relationships between their semantic labels. Benefiting from the multi-layer graph knowledge inference mechanism, the disease association information in the knowledge graph can be generalized into a set of associative classifiers with strong discriminatory power. Finally, the obtained classifiers are applied to the image representations to produce multi-label predictions. Experimental results on two benchmark datasets show that the proposed method effectively improves domain knowledge matching and clinical disease reasoning, thus
improving the interpretability of the constructed models.

(4) To solve the missing structured semantic association learning among different images, we propose a novel characterization optimization learning method based on the collaboration of cross-image semantic contents. The method first computes similarity scores of semantic contents among different images to generate a knowledge graph of cross-image semantic associations. According to the cooperativity of semantic content, the method further normalizes and guides the characterization of discriminative image data by optimizing the representation errors under constraints of the semantic content similarity measure. Meanwhile, it defines a GCN as the metric function to explore the high-order associations of semantic content among different images, which is used to perform cooperative characterization learning of cross-image semantic contents. Benefiting from the multi-layer graph knowledge inference mechanism, the approach optimizes the feature representations and guarantees the semantic consistency of cooperative characterization learning, thus realistically reflecting clinical thinking patterns for cross-image diagnosis. Experimental results on different benchmark datasets show that the proposed method achieves the best performance among the compared approaches. To demonstrate the robustness, generalization, and generality of our methods, we further construct an effective extension to the task of multi-label classification on large-scale natural scene images.
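To make the fusion strategy of contribution (1) concrete, the combined feature-level and decision-level fusion can be sketched as follows. This is a minimal NumPy illustration, not the dissertation's implementation: the stream dimensions, the number of disease labels, and the random (untrained) linear heads are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical outputs of the two asymmetric sub-networks for a batch
# of 2 images (feature dimensions chosen arbitrarily).
f_a = rng.standard_normal((2, 64))   # stream A: wider branch
f_b = rng.standard_normal((2, 32))   # stream B: lighter, complementary branch

n_classes = 14                       # assumed number of thorax disease labels

# Feature-level fusion: concatenate the two representations and
# classify the joint feature with a single linear head.
w_joint = rng.standard_normal((64 + 32, n_classes))
p_joint = softmax(np.concatenate([f_a, f_b], axis=1) @ w_joint)

# Decision-level fusion: classify each stream separately and
# average the per-stream probability outputs.
w_a = rng.standard_normal((64, n_classes))
w_b = rng.standard_normal((32, n_classes))
p_dec = 0.5 * (softmax(f_a @ w_a) + softmax(f_b @ w_b))

# Combine both fusion schemes into the final prediction; averaging
# probability distributions keeps each row a valid distribution.
p_final = 0.5 * (p_joint + p_dec)
```

In a trained model the two heads would share a collaborative training objective; here the point is only how the two fusion levels compose.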
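The global-local cooperation of contribution (2) can be sketched in the same spirit: a global branch pools the whole feature map, while a local branch pools only inside a salient lung region, suppressing background noise. The feature-map size, channel count, and toy rectangular mask below are illustrative assumptions, not the dissertation's actual segmentation output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical CNN feature map for one image: (channels, H, W).
feat = rng.standard_normal((8, 7, 7))

# Toy binary lung mask (e.g. from segmentation or weakly supervised
# localization), downsampled to the feature-map resolution.
mask = np.zeros((7, 7))
mask[1:6, 1:5] = 1.0

# Global branch: average-pool over the whole spatial extent.
g = feat.mean(axis=(1, 2))                       # (8,)

# Local branch: average-pool only inside the salient region, so
# activations outside the mask contribute nothing.
l = (feat * mask).sum(axis=(1, 2)) / mask.sum()  # (8,)

# Joint representation fed to the final classifier.
joint = np.concatenate([g, l])                   # (16,)
```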
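For contribution (3), the chain "co-occurrence statistics → graph → GCN → label classifiers" can be sketched as below. The toy annotation matrix, embedding sizes, and random weights are assumptions; only the structure (conditional co-occurrence adjacency, symmetric normalization, one graph-convolution layer whose output rows act as classifiers) mirrors the described method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy multi-label annotations: 5 images over 3 disease labels.
Y = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 0]], dtype=float)

# Co-occurrence counts -> conditional probabilities A[i, j] ~ P(j | i).
co = Y.T @ Y
A = co / np.diag(co)[:, None]

# Normalized adjacency with self-loops, GCN-style.
A_tilde = A + np.eye(3)
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))

# One graph-convolution layer over label embeddings: ReLU(A_hat E W).
# The rows of W_out act as a set of inter-dependent label classifiers.
E = rng.standard_normal((3, 16))          # initial label embeddings
W = rng.standard_normal((16, 32))
W_out = np.maximum(A_hat @ E @ W, 0.0)    # (3, 32)

# Apply the inferred classifiers to an image representation.
x = rng.standard_normal(32)
scores = W_out @ x                        # one score per disease label
```

Stacking several such layers gives the multi-layer graph knowledge inference mechanism mentioned above.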
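Finally, the cross-image semantic graph of contribution (4) can be sketched as a cosine-similarity adjacency over images, with one GCN-style propagation step refining each image's representation from its semantically similar neighbors. Batch size, feature dimension, and the toy label vectors are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy batch: 4 images, each with a feature vector and a label vector.
X = rng.standard_normal((4, 8))           # image representations
Y = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1],
              [1, 1, 0]], dtype=float)    # semantic contents (labels)

# Cross-image semantic similarity: cosine similarity of label vectors.
U = Y / np.linalg.norm(Y, axis=1, keepdims=True)
S = U @ U.T                               # (4, 4), entries in [0, 1]

# Row-normalized propagation over the semantic graph: each image's
# representation becomes a mixture weighted by semantic similarity.
P = S / S.sum(axis=1, keepdims=True)
W = rng.standard_normal((8, 8))
X_ref = np.maximum(P @ X @ W, 0.0)        # one GCN-style layer
```

Note that images 0 and 3 carry identical labels, so their refined representations coincide: the propagation enforces exactly the cross-image semantic consistency the method aims for.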