Graph deep learning models have achieved great success in recent years, owing in large part to training on large-scale data. Classical graph data mining usually adopts a representation learning approach: the nodes and relationships of the graph are fed into a graph deep learning model to obtain embedding vectors that encode the graph's information, and downstream tasks are then performed on these vectors. However, when the graph dataset is too small, the performance of graph deep learning models drops sharply, making it difficult to mine and analyze the graph data. With the rise of meta-learning as a way to solve few-shot problems, a few algorithms have begun to apply meta-learning to few-shot problems in graph data mining.

In classical meta-learning algorithms, the losses of all meta-training subtasks are given equal weight when updating the parameters of the meta-learner during meta-training, which means that every meta-training subtask transmits information to the meta-testing subtasks with equal weight. In general, however, the more closely related two subtasks are, the more important the information one conveys to the other, so equal weighting is clearly unreasonable. Based on this observation, this paper integrates an attention mechanism into the classical meta-learning framework and uses the optimized framework to solve the few-shot node classification problem.

First, this paper defines the meta-learning process for the node classification task in detail and shows that dividing the original dataset into subtasks one node at a time destroys the local edge information of the graph. The original dataset is therefore divided by subgraph, and it is proved theoretically that subgraphs preserve the local information of the graph. Second, the paper addresses the problem that, in the meta-training procedure of classical meta-learning, the losses of all meta-training subtasks update the meta-learner with equal weights. An attention mechanism is introduced that exploits the fact that different meta-training subtasks transmit unequal amounts of information to the meta-testing subtasks: Euclidean distance, cosine similarity, KL divergence, and related measures capture differences in data distribution between the meta-training and meta-testing subtasks, while structural similarity captures differences in the structure of their subgraphs. From these measures, an information weight is computed for each meta-training subtask, and these weights scale the outer-loop update of meta-training, completing the optimization of the classical meta-learning framework. Finally, the optimized meta-learning framework is used to perform the few-shot node classification task.
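To make the weighted outer-loop update concrete, the following is a minimal, first-order PyTorch sketch of the idea, not the paper's exact implementation. The task embedding (the mean of the support-set features), the equal mixing of the Euclidean and cosine scores, and the `support_x`/`support_y`/`query_x`/`query_y` field names are illustrative assumptions; the KL-divergence and structural-similarity terms would contribute additional scores before the softmax, and a graph-based learner would also receive the subtask's edge index.

```python
import copy
import torch
import torch.nn.functional as F


def task_embedding(support_x):
    # Summarize a subtask by the mean of its support-set node features
    # (a deliberately simple, assumed task representation).
    return support_x.mean(dim=0)


def information_weights(train_embs, test_emb):
    # Attention scores mix a (negative) Euclidean-distance term with a
    # cosine-similarity term; a softmax turns them into weights summing to 1.
    eucl = -torch.cdist(train_embs, test_emb.unsqueeze(0)).squeeze(1)
    cos = F.cosine_similarity(train_embs, test_emb.unsqueeze(0))
    return F.softmax(0.5 * eucl + 0.5 * cos, dim=0)


def weighted_meta_step(learner, train_tasks, test_task,
                       inner_lr=0.01, outer_lr=0.001):
    # One first-order outer-loop step: each meta-training subtask's query
    # gradient is scaled by its information weight instead of a uniform 1/N.
    embs = torch.stack([task_embedding(t["support_x"]) for t in train_tasks])
    w = information_weights(embs, task_embedding(test_task["support_x"]))

    meta_grads = [torch.zeros_like(p) for p in learner.parameters()]
    for weight, task in zip(w, train_tasks):
        fast = copy.deepcopy(learner)  # adapt a private copy per subtask
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        opt.zero_grad()
        F.cross_entropy(fast(task["support_x"]), task["support_y"]).backward()
        opt.step()  # inner-loop adaptation on the support set

        query_loss = F.cross_entropy(fast(task["query_x"]), task["query_y"])
        grads = torch.autograd.grad(query_loss, list(fast.parameters()))
        for acc, g in zip(meta_grads, grads):
            acc += weight * g  # attention-weighted accumulation, not 1/N

    with torch.no_grad():  # apply the weighted meta-gradient
        for p, g in zip(learner.parameters(), meta_grads):
            p -= outer_lr * g
```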
Experiments are carried out on CiteSeer and Cora, two classic datasets commonly used for the node classification task, and node classification accuracy is used to evaluate the model. The results of several algorithms applied to the node classification task are compared, and this paper also compares different GNN modules within the same optimized meta-learning framework. The experimental results show that, compared with other classical algorithms, our model achieves the best results under the same few-shot settings on the CiteSeer and Cora datasets. Among the GCN, GraphSAGE, and GAT modules, the GAT module achieves the best results on the few-shot node classification task within the optimized meta-learning framework.
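To make the module comparison concrete, below is a minimal sketch of a GAT base learner built with the widely used PyTorch Geometric library; the class name, layer sizes, and head count are illustrative assumptions, not the authors' configuration. Replacing `GATConv` with `GCNConv` or `SAGEConv` from the same package (with minor changes, since those layers take no `heads` argument) yields the GCN and GraphSAGE variants of the comparison.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv  # swap for GCNConv / SAGEConv


class GATLearner(torch.nn.Module):
    # Hypothetical two-layer GAT used as the per-subtask base learner.
    def __init__(self, in_dim, hidden_dim, num_classes, heads=4):
        super().__init__()
        self.conv1 = GATConv(in_dim, hidden_dim, heads=heads)
        self.conv2 = GATConv(hidden_dim * heads, num_classes, heads=1)

    def forward(self, x, edge_index):
        # Attention over the subtask's subgraph; the output logits feed the
        # cross-entropy losses of the inner and outer loops.
        x = F.elu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)
```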