More than 400 million people worldwide, including more than 100 million in China, suffer from diabetes. About a third of diabetic patients develop diabetic retinopathy (DR), a complication that can lead to visual impairment or even blindness. Regular fundus examination is therefore necessary for every person with diabetes, and the more severe the condition, the more frequent the examinations must be. Such a large volume of examinations not only places a heavy burden on medical personnel but also increases national medical expenditure. To make screening more economical, efficient and accurate, an automatic computer-aided diagnosis system is needed, and building one has become an important research topic. With the great progress of deep learning in image classification, using this technology to build an efficient automatic DR diagnosis system has become a research hotspot. However, because medical data typically form small and imbalanced datasets, directly applying deep learning to disease classification gives unsatisfactory results. To address these problems and achieve efficient and accurate automatic detection of DR, this work studies the following aspects.

First, transfer learning is applied to a pre-trained generative adversarial network (GAN) so that clear DR images can be generated from a small DR dataset. The Fréchet Inception Distance (FID), which measures image quality and diversity, is 86 with transfer learning versus 293 without it, indicating that the transfer learning approach generates better images.

Second, to increase the controllability of the generated samples, the GAN is modified so that the class label is embedded into the layer following the parameter-frozen layers of the classifier. Combining transfer learning with this label information turns the model into a conditional GAN, which gives more control over the data generation task. The DR fundus images generated in this way contain more lesion detail, and the use of category labels improves control over the class of the generated images. The conditional GAN trained without transfer learning has an FID of 287, much worse than the FID of 124 achieved by the conditional GAN trained with transfer learning; visually, the samples from the transfer-learning-based conditional GAN show more lesions and better diversity. Judged by FID, the image quality and diversity of the conditional GAN are slightly worse than those of the unconditional GAN, but its images contain more lesions, while the vascular detail is slightly worse than in the unconditional network. This reflects that the conditional network pays more attention to generating lesions and less attention to the common vascular structure.

Third, an interpretable classifier based on transfer learning and DenseNet121 is proposed. Starting from a pre-trained DenseNet121, transfer learning is used to modify part of the network structure for the automatic DR diagnosis task. The experimental results show a sensitivity of 0.77, a specificity of 0.91 and an accuracy of 88%. At the same time, gradient-weighted class activation mapping (Grad-CAM) is used to visualize the classifier's decisions, showing which regions of the input image have the greatest influence on the classification result.
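To make this third aspect concrete, the following is a minimal PyTorch-style sketch of how an ImageNet-pretrained DenseNet121 can be adapted for DR classification and inspected with Grad-CAM. The freezing strategy, the two-class head and the helper function shown here are illustrative assumptions, not the exact configuration used in this work.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Load an ImageNet-pretrained DenseNet121 and replace its classification head.
# (Illustrative: the thesis modifies "part of the network structure"; the layers
# that are actually frozen or replaced are assumptions here.)
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor; only the new head would be trained.
for p in model.features.parameters():
    p.requires_grad = False

num_classes = 2  # assumed binary (DR / no DR) for illustration; the real grading may use more classes
model.classifier = torch.nn.Linear(model.classifier.in_features, num_classes)
model.eval()


def grad_cam(model, image, target_class):
    """Gradient-weighted class activation map for one image tensor of shape (1, 3, H, W)."""
    image = image.clone().requires_grad_(True)   # build a graph through the (frozen) feature extractor
    features = model.features(image)             # last convolutional feature maps, (1, 1024, h, w)
    features.retain_grad()
    pooled = F.adaptive_avg_pool2d(F.relu(features), 1).flatten(1)
    logits = model.classifier(pooled)
    logits[0, target_class].backward()

    weights = features.grad.mean(dim=(2, 3), keepdim=True)       # channel-wise average of gradients
    cam = F.relu((weights * features).sum(dim=1, keepdim=True))  # weighted sum of feature maps
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()         # normalised heat map in [0, 1]
```

Overlaying the returned heat map on the fundus photograph highlights the regions that drove the prediction, which is the kind of visualization discussed next.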
Through this Grad-CAM visualization it is found that, although the classifier achieves high accuracy, it does not always attend to the lesion areas and is biased against fundus image classes with few samples. This reflects the problems of models trained on small, imbalanced datasets and gives a better understanding of the shortcomings of the trained model.
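As a final note on the evaluation used for the generative models above, the FID values quoted in this abstract are Fréchet Inception Distances, for which lower means better quality and diversity. The sketch below shows how the metric is typically computed; it assumes that Inception-v3 feature vectors for the real and generated fundus images have already been extracted, and the function name is illustrative.

```python
import numpy as np
from scipy import linalg


def frechet_inception_distance(feats_real, feats_fake):
    """FID between two sets of Inception-v3 pool features, each of shape (N, 2048)."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)

    # Matrix square root of the product of covariances; small imaginary parts
    # caused by numerical error are discarded.
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Under this metric, the drop from 293 to 86 for the unconditional GAN and from 287 to 124 for the conditional GAN quantifies the benefit of transfer learning reported above.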