For neurodegenerative brain diseases such as Alzheimer's disease (AD) and HIV-associated neurocognitive disorders (HAND), accurate diagnosis and targeted treatment at an early stage may delay or even reverse the pathological process, which is of great clinical and scientific value for the early treatment of patients and the improvement of their quality of life. At present, the diagnosis of AD and HAND still relies on neurological instruments such as psychological and linguistic scales; since these neurodegenerative diseases lack clear brain-lesion characteristics and a definite diagnostic basis, an objective diagnostic method is urgently needed. Traditional deep convolutional neural networks have achieved some success in medical image classification, but for brain images of HAND patients, such as MRI and CT, the shortage of medical personnel and the high cost of image labeling mean that the available samples cannot support training an auxiliary diagnostic model with good classification performance by conventional deep learning. HAND stage classification is therefore a typical few-shot medical image classification problem.

In summary, this paper focuses on the ANI/NC binary classification problem on the HAND dataset provided by Beijing You'an Hospital of Capital Medical University, studies few-shot classification of MRI medical images, and designs a binary classification model that distinguishes brain MRI images of early-stage HAND patients (ANI, asymptomatic neurocognitive impairment) from non-ill control subjects (NC).

The first main contribution and innovation of this paper: to address the insufficiency of HAND patient sample data, a few-shot MRI medical image classification model based on meta-learning is proposed. First, a series of binary classification tasks are
constructed from MRI brain images of four disease stages (AD, LMCI, EMCI, NC) drawn from the public Alzheimer's disease dataset ADNI-1. These tasks are fed to the TAML classification model in the form of a task space, and after meta-training the model's binary classification ability is generalized to the target ANI/NC classification task, reducing the model's dependence on the number of real clinical samples. In traditional meta-learning, however, each meta-training task is built from randomly selected samples; overly simple sample selection lowers task complexity, and such simple tasks degrade the model's strength, i.e., its generalization performance, driving it toward a mediocre local optimum. To address this problem, this paper proposes a Task-Augmentation Meta-Learning (TAML) classification model: within every meta-batch of meta-training, the per-class classification accuracy is computed from the samples of all current training tasks, and a new augmentation task is constructed in real time to update the parameters of the base learners a second time. The augmentation tasks created at each iteration allow the model parameters to be optimized more fully, compensating for the negative impact of simple samples on model strength. Experimental results show that, on few-shot target classification tasks, TAML outperforms a traditional convolutional neural network classifier (3D CNN), transfer learning methods, other meta-learning methods (MAML), and metric learning methods.

The second main contribution and innovation of this paper: to address the domain shift that arises when generalizing the meta-learning model from the ADNI-1 public dataset (source domain) to the HAND target dataset (target domain), this paper designs a Task-Space
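The abstract does not give implementation details for task construction or task augmentation. As a minimal sketch, the following assumes that each meta-training task pairs two of the four source classes, and that the augmentation task is rebuilt from the two classes with the lowest current accuracy; the function names and the "hardest-classes" selection rule are illustrative assumptions, not the thesis's exact procedure.

```python
import random

def make_binary_task(pool, class_pair, k_shot=5, k_query=5):
    """Sample one binary classification task (support + query sets)
    from two source-domain classes, e.g. ("AD", "NC")."""
    support, query = [], []
    for label, cls in enumerate(class_pair):
        samples = random.sample(pool[cls], k_shot + k_query)
        support += [(s, label) for s in samples[:k_shot]]
        query += [(s, label) for s in samples[k_shot:]]
    return support, query

def build_augmentation_task(pool, per_class_acc, k_shot=5, k_query=5):
    """TAML-style step (simplified assumption): after evaluating all
    tasks in a meta-batch, pick the two classes with the LOWEST
    per-class accuracy and build a fresh, harder task from them,
    to be used for a second update of the base-learner parameters."""
    hardest = sorted(per_class_acc, key=per_class_acc.get)[:2]
    return make_binary_task(pool, tuple(hardest), k_shot, k_query)
```

For example, with `per_class_acc = {"AD": 0.9, "LMCI": 0.6, "EMCI": 0.55, "NC": 0.95}`, the augmentation task would be sampled from EMCI and LMCI, the two classes the model currently handles worst.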
Expansion (TSE) module that, based on the meta-training images, expands the distribution boundary of the meta-training tasks by simulating an increase in the number of image types in the source-domain dataset, so that the model learns the classification process of a wider variety of binary tasks and gains stronger domain generalization ability. Meanwhile, in the HAND target classification stage, to remedy the inadequate feature extraction that results from training on a single image modality, this paper designs a multimodal feature fusion module based on two imaging modalities, sMRI and rs-fMRI, so that the model can fully exploit the feature information of both. The final experiments show that, with the task-space expansion and multimodal fusion modules added, the meta-learning and multimodal HAND classification model achieves the best classification performance among all the classification models considered.
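The fusion mechanism is not specified in the abstract. The sketch below illustrates one common choice, late fusion by concatenation, assuming each modality (sMRI and rs-fMRI) has already been encoded into a fixed-length feature vector; the normalization step and function name are assumptions for illustration only.

```python
import numpy as np

def fuse_features(smri_feat, fmri_feat):
    """Late-fusion sketch: L2-normalize each modality's feature
    vector so neither dominates by scale, then concatenate them
    into a single vector for the downstream classifier head."""
    s = smri_feat / (np.linalg.norm(smri_feat) + 1e-8)
    f = fmri_feat / (np.linalg.norm(fmri_feat) + 1e-8)
    return np.concatenate([s, f])
```

Per-modality normalization before concatenation is a standard precaution when the two encoders produce features at different scales; alternatives such as attention-weighted or gated fusion would replace the concatenation step.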