The rapid development of digital image acquisition technology provides a solid foundation for building effective computer-aided diagnosis (CAD) systems. An effective CAD system can help doctors diagnose patients' diseases quickly and accurately, and how to classify medical images quickly and accurately is a key technical problem that must be solved when building such a system. In recent years, a large number of researchers have studied the automatic classification of medical images in CAD systems. However, many problems and shortcomings remain, which are embodied in: 1) A large number of studies combine traditional hand-crafted features with classifiers to classify medical images; this approach requires manual selection of image features, and the selected features are only a shallow representation of the image. 2) Medical image data sets are small in scale, so even when training a deep network yields a deep representation of the image, it is mostly a single-scale representation. 3) Most research on classifying medical images with deep neural networks trains a single network; few studies integrate several different neural networks. 4) At present, most work on automatic classification of medical images focuses on a single feature space or subspace; few studies exploit multiple feature spaces or their subspaces. 5) In many cases, the images in a medical data set do not share the same resolution. To train a deep network, most methods directly scale or crop images of different resolutions to one fixed resolution, which makes images of different resolutions lose information inconsistently. In view of the above problems and shortcomings, a thorough study is conducted in this thesis, and the main achievements are as follows:

1) A new model is proposed to classify medical images. The model classifies medical images automatically by combining the deep features extracted by a coding network with statistical features. This thesis argues that statistical features (color, texture, shape, or some combination of them) combined with classifiers are mostly designed for specific problems, and such features can only represent medical images shallowly. The emergence of deep learning provides an end-to-end model that abstracts images into sufficiently high-level features, but the model lacks interpretability. To solve the problem that traditional features represent medical images only shallowly, and to improve the interpretability of the deep model, we designed an algorithm that combines deep features and shallow features to classify medical images. The proposed algorithm reaches classification accuracies of 90.2% and 90.1% on HIS2828 and ISIC2017 respectively, improvements of 4.5% and 1.6% over other algorithms, which demonstrates its effectiveness.

2) An algorithm based on multi-scale deep features is proposed to classify skin histopathological images, so as to address their high resolution. Training a deep convolutional network on these images directly would require scaling them down drastically, which degrades classification accuracy. The algorithm trains a convolutional neural network as a coding network, then extracts multi-scale deep features and fuses them to classify skin histopathological images automatically. The proposed algorithm is validated on the Skin Disease dataset (SDT), where its classification accuracy reaches 95.3%, 2.9% higher than other algorithms.

3) An algorithm is proposed to classify breast cancer medical images by integrating different types of deep neural networks. Ensemble algorithms for medical image classification are usually carried out in one feature space or its subspace. To increase the diversity of the ensemble and improve classification accuracy, this thesis proposes an algorithm that integrates multiple feature spaces and subspaces of medical images. The algorithm integrates three deep neural networks with different structures (the VGG16 deep network, Network in Network, and the GoogLeNet network) to extract multi-scale deep features of breast cancer images and represent them at a deep level. The corresponding experiments verify that these representations complement each other. The algorithm achieves 88.38% and 86.99% classification accuracy for benign and malignant tumors on the Breast Cancer Histopathological Database (BreakHis), 9.2% and 1.9% higher than other algorithms. Finally, a Friedman test is added to evaluate all classification algorithms from a statistical point of view.

4) An automatic medical image classification algorithm based on a spatial pyramid deep network is proposed to extract and integrate multi-scale deep features. This algorithm addresses the information loss caused by the differing resolutions of medical images, avoiding scaling or cropping them directly to a fixed resolution, so as to preserve classification accuracy. It takes images of different resolutions as input and trains a spatial pyramid network as the encoding network; the extracted feature maps are pooled by the pyramid operation, after which multi-scale deep features are obtained. A random forest algorithm then integrates the extracted multi-scale deep features and votes to produce the final classification results. The accuracy on the blood cell (BS) and chest-xray-pneumonia (CXP) data sets is 91.51% and 92.5% respectively, improvements of 1.99% and 4.0% over other algorithms.
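The deep-plus-statistical fusion of achievement 1 can be sketched as follows. This is a minimal NumPy illustration, not the thesis implementation: the `deep_feat` vector stands in for the output of a trained coding network, and the per-channel color histogram is only one of the statistical features (color, texture, shape) mentioned above; the fused vector would then be fed to an ordinary classifier.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Shallow statistical feature: per-channel intensity histogram.

    img: (H, W, 3) array with values in [0, 1].
    Returns a vector of length 3 * bins.
    """
    feats = [np.histogram(img[..., ch], bins=bins, range=(0.0, 1.0),
                          density=True)[0] for ch in range(3)]
    return np.concatenate(feats)

def fuse_features(deep_feat, img, bins=8):
    """Concatenate L2-normalised deep and statistical feature vectors.

    Normalising each part separately keeps one feature family from
    dominating the other purely through its scale.
    """
    stat = color_histogram(img, bins=bins)
    deep = deep_feat / (np.linalg.norm(deep_feat) + 1e-12)
    stat = stat / (np.linalg.norm(stat) + 1e-12)
    return np.concatenate([deep, stat])

# Stand-in for an encoder output; a real pipeline would use the coding network.
rng = np.random.default_rng(0)
deep_feat = rng.random(128)
img = rng.random((32, 32, 3))
fused = fuse_features(deep_feat, img)   # length 128 + 3 * 8 = 152
```

Any conventional classifier (SVM, random forest, etc.) can then be trained on `fused`; the thesis leaves the classifier choice to the specific experiment.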
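The spatial pyramid pooling step of achievement 4 can be sketched as below. This is an illustrative NumPy version under assumed details the abstract does not fix: max pooling over a (1, 2, 4) pyramid of grid bins. The point it demonstrates is the one the abstract makes — feature maps from images of different resolutions pool to a fixed-length multi-scale vector, so no image has to be scaled or cropped to a fixed resolution first.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over pyramid grids.

    Returns a vector of fixed length C * sum(l * l for l in levels),
    regardless of the spatial size H x W of the input.
    """
    c, h, w = fmap.shape
    pooled = []
    for l in levels:
        # Bin edges cover the whole map even when H, W are not divisible by l.
        hs = np.linspace(0, h, l + 1).astype(int)
        ws = np.linspace(0, w, l + 1).astype(int)
        for i in range(l):
            for j in range(l):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))  # one value per channel
    return np.concatenate(pooled)

# Two feature maps of different resolution yield vectors of the same length:
rng = np.random.default_rng(0)
va = spatial_pyramid_pool(rng.random((8, 13, 17)))
vb = spatial_pyramid_pool(rng.random((8, 32, 24)))
# Both have length 8 * (1 + 4 + 16) = 168.
```

In the thesis pipeline these fixed-length multi-scale vectors are what the random forest integrates and votes on.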