
Research On Pre-Training Models Based On Graph Neural Networks

Posted on: 2024-05-12
Degree: Master
Type: Thesis
Country: China
Candidate: X L Xing
Full Text: PDF
GTID: 2568307160955519
Subject: Computer Science and Technology
Abstract/Summary:
Graph Neural Networks (GNNs) are an emerging class of intelligent algorithms. By integrating deep learning with graph computing, they achieve strong cognitive and problem-solving capabilities and are widely used in search, recommendation, and risk control. Graph neural networks have proven to be a powerful tool for modeling graph-structured data. However, training them usually requires a large amount of task-specific labeled data, which is often very expensive to obtain; pre-training models for graph neural networks can mitigate this drawback. Because their topology varies, graphs are irregular non-Euclidean data. Compared with the one- or two-dimensional grid (Euclidean) data in which language and image data reside, non-Euclidean data are more common and more complex, so graph pre-training remains a challenging problem.

This thesis first reviews graph neural network pre-training models as a whole and summarizes the advantages and disadvantages of current mainstream models, then proposes new graph neural network pre-training models. The specific research is as follows:

(1) A pre-training model for deep graph neural networks that combines graph residual connections with reversible grouping is proposed, addressing the shallow encoders and weak generalization of existing GNN pre-training models. In this model, graph generation serves as the pre-training task, and residual and reversible-grouping techniques are combined to increase the depth of the graph neural network (a minimal sketch of such an encoder is given after this abstract). Experiments on the Open Academic Graph and other datasets show that the model improves performance on large-scale graph learning tasks for both node attribute prediction and graph attribute prediction.

(2) The design of the pre-training task is crucial for achieving positive transfer and for learning the intrinsic attributes of the graph domain, and it often determines the training effect of the model. Most existing GNN pre-training methods learn representations by solving a single pre-training task, yet different pre-training tasks provide different supervision signals from different angles, so designing multiple pre-training tasks helps the model learn more valuable information. This thesis therefore proposes a multi-task pre-training model for graph neural networks. It extracts sufficient and relevant contextual information for each user-item pair from a heterogeneous graph, constructs subgraphs with a heterogeneous subgraph network fused with an edge attention mechanism, and, through a multi-task pre-training strategy, gradually learns local and global intrinsic information from the constructed subgraphs. Graph reconstruction and subgraph contrastive learning are designed as the pre-training tasks, so the model is pre-trained under both generative and contrastive strategies to learn more effective representations (a loss-level sketch of this combination also follows the abstract). Experiments on multiple real-world datasets demonstrate the effectiveness of the proposed method against many competitive baselines, especially when only limited training data is available.
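As a concrete illustration of the encoder idea in (1), the following is a minimal PyTorch sketch of a deep GNN built from reversible residual blocks. It assumes a dense normalized adjacency and a two-group (RevNet-style) channel split; all class and variable names are illustrative, and the thesis's actual grouping scheme and graph convolution may differ.

    import torch
    import torch.nn as nn

    class DenseGCNLayer(nn.Module):
        """One graph convolution H' = ReLU(A_hat @ H @ W) on a dense normalized adjacency."""
        def __init__(self, dim):
            super().__init__()
            self.lin = nn.Linear(dim, dim)

        def forward(self, a_hat, h):
            return torch.relu(a_hat @ self.lin(h))

    class ReversibleGNNBlock(nn.Module):
        """Reversible residual block over two feature groups:
           y1 = x1 + F(A, x2);  y2 = x2 + G(A, y1).
           Inputs can be reconstructed exactly from outputs."""
        def __init__(self, dim):
            super().__init__()
            assert dim % 2 == 0
            self.f = DenseGCNLayer(dim // 2)
            self.g = DenseGCNLayer(dim // 2)

        def forward(self, a_hat, x):
            x1, x2 = x.chunk(2, dim=-1)
            y1 = x1 + self.f(a_hat, x2)   # residual update of group 1
            y2 = x2 + self.g(a_hat, y1)   # residual update of group 2
            return torch.cat([y1, y2], dim=-1)

    class DeepReversibleGNN(nn.Module):
        """Deep encoder obtained by stacking reversible residual blocks."""
        def __init__(self, dim, depth):
            super().__init__()
            self.blocks = nn.ModuleList(ReversibleGNNBlock(dim) for _ in range(depth))

        def forward(self, a_hat, h):
            for blk in self.blocks:
                h = blk(a_hat, h)
            return h

    # Toy usage: 5 nodes, 16-dim features, 8 reversible blocks.
    a = torch.eye(5)                      # stand-in for a normalized adjacency
    model = DeepReversibleGNN(16, 8)
    out = model(a, torch.randn(5, 16))
    print(out.shape)                      # torch.Size([5, 16])

Because each block's inputs are recoverable from its outputs, intermediate activations need not all be cached for backpropagation, which is what makes very deep residual stacks affordable in memory.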
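The two pre-training tasks in (2) can likewise be sketched at the loss level. The snippet below is a minimal illustration only: it assumes a dense adjacency with an inner-product edge decoder for the generative (reconstruction) task and an InfoNCE-style objective over two augmented subgraph views for the contrastive task. The balancing coefficient lam and all function names are hypothetical, and the heterogeneous subgraph construction and edge attention described above are not reproduced here.

    import torch
    import torch.nn.functional as F

    def reconstruction_loss(z, adj):
        """Generative task: decode each edge as sigmoid(z_i . z_j) and score the
           result against the observed dense adjacency with binary cross-entropy."""
        logits = z @ z.t()
        return F.binary_cross_entropy_with_logits(logits, adj)

    def subgraph_contrastive_loss(z1, z2, tau=0.5):
        """Contrastive task (InfoNCE): embeddings of two augmented views of the
           same subgraph are positives; all other pairs in the batch are negatives."""
        z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
        sim = z1 @ z2.t() / tau                   # [B, B] view-to-view similarities
        targets = torch.arange(z1.size(0))        # matching diagonal entries are positives
        return F.cross_entropy(sim, targets)

    def multitask_pretrain_loss(z, adj, z_view1, z_view2, lam=1.0):
        """Joint objective: generative plus contrastive term.
           lam is a hypothetical balancing weight, not taken from the thesis."""
        return reconstruction_loss(z, adj) + lam * subgraph_contrastive_loss(z_view1, z_view2)

    # Toy usage: 6 nodes, and a batch of 4 subgraph embeddings per view.
    z = torch.randn(6, 16)
    adj = (torch.rand(6, 6) > 0.5).float()
    loss = multitask_pretrain_loss(z, adj, torch.randn(4, 16), torch.randn(4, 16))
    print(f"pretrain loss: {loss.item():.4f}")

Combining the two terms lets the encoder receive both a generative signal (recover local graph structure) and a contrastive signal (discriminate subgraphs globally), which is the sense in which the model learns local and global intrinsic information.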
Keywords/Search Tags: graph neural network, pre-training, graph-structured data, graph representation learning