Deep learning models, supported by big data, have performed very well in recent years in areas such as computer vision and natural language processing. In practical applications, however, training data may be scarce: a small number of training samples cannot describe the distribution of the whole dataset, and deep learning models may overfit to this small amount of data, resulting in poor performance. Few-shot learning was proposed to solve this problem. The goal of few-shot learning research is to develop more effective methods for training deep learning models on a limited number of samples, so that the models can generalize to novel samples. Although existing few-shot learning methods have achieved good results, some challenges remain. On the one hand, some methods that use Graph Neural Networks for few-shot image classification fail to incorporate additional prior information and do not fully leverage the strong inference capabilities of Graph Neural Networks. On the other hand, most few-shot learning methods based on data augmentation do not guarantee the quality of the synthesized samples. This paper proposes improved methods to address both problems.

To address the issue of inadequate sample quality in most data-augmentation-based few-shot learning methods, this paper proposes a few-shot image classification method based on DEP-Net (Delta-Encoder Prototypical Network). An improved self-attention mechanism is used to compute the prototype representation of each class, and an encoder learns the transferable deformations between samples of a class and the prototype of that class. A decoder then transfers the extracted deformations to the novel classes to synthesize more samples. Because the deformations between intra-class samples and the prototype carry richer semantic information, they help ensure the diversity of the synthesized samples; their realism and discriminability are ensured by a similarity loss and a classification loss. The method improves the quality of the synthesized samples, provides more comprehensive prior information for the model, and alleviates overfitting and poor generalization. To verify its effectiveness and superiority, comparative experiments were conducted on three benchmark datasets: miniImageNet, CUB, and CIFAR-FS. The experimental results show that the proposed method effectively improves the accuracy of few-shot image classification.

To address the inadequate incorporation of prior information and the underuse of the strong inference capabilities of Graph Neural Networks in GNN-based few-shot image classification, this paper proposes a few-shot learning method based on diffusion models and Graph Neural Networks. Because diffusion models have stronger generation capabilities than variational autoencoders and related models, this paper uses diffusion models to enrich the information in the training set. First, the images of a classification task are embedded into a latent space and input into a diffusion model, which synthesizes additional samples that fit the true distribution, guided by the supervision of the labeled samples. Then, the expanded sample set is input into the graph neural network, where the samples serve as nodes and the relationships between samples serve as edges; node features and edge features are updated iteratively. Finally, the label of a test sample is predicted from the edge information between the test sample and the labeled samples. Experimental results on benchmark datasets verify the effectiveness of the method, which improves upon previous graph-neural-network-based few-shot learning methods.
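The edge-based prediction described above can be sketched as follows. This is a minimal illustrative version only, not the paper's architecture: edge weights are taken as simple similarities between node features, node features are re-aggregated along the edges for a few rounds, and the query's label is read off from its strongest edges to the labeled support samples. All function names and the toy feature dimensions are hypothetical.

```python
import numpy as np

def edge_update(nodes):
    # Refresh edge (i, j) from the similarity of node features i and j,
    # then row-normalize so each node's outgoing edges sum to 1.
    n = nodes.shape[0]
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            sim[i, j] = np.exp(-np.linalg.norm(nodes[i] - nodes[j]))
    return sim / sim.sum(axis=1, keepdims=True)

def node_update(nodes, edges):
    # Each node aggregates all node features, weighted by edge strength.
    return edges @ nodes

def predict(query_idx, support_labels, edges):
    # The query takes the label of the support class whose samples
    # have the strongest total edge weight to it.
    scores = {}
    for j, y in support_labels.items():
        scores[y] = scores.get(y, 0.0) + edges[query_idx, j]
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
nodes = rng.normal(size=(5, 8))                   # 4 support + 1 query node
nodes[4] = nodes[0] + 0.01 * rng.normal(size=8)   # query lies near class 0
edges = edge_update(nodes)
for _ in range(3):                                # iterative refinement
    nodes = node_update(nodes, edges)
    edges = edge_update(nodes)
support = {0: 0, 1: 0, 2: 1, 3: 1}                # node index -> class label
print(predict(4, support, edges))                 # query assigned to class 0
```

In the full method, the similarity and aggregation steps would be learned transformations rather than fixed formulas, and the node features would come from an embedding network over the diffusion-expanded sample set; the sketch only shows the node/edge alternation and the edge-based readout.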