
Research On Recommendation Algorithm Based On Graph Neural Network

Posted on: 2024-06-30
Degree: Master
Type: Thesis
Country: China
Candidate: Y P Chang
Full Text: PDF
GTID: 2568306923471384
Subject: Communication engineering
Abstract/Summary:
In recent years, graph neural networks (GNNs) have achieved remarkable performance in the field of recommendation algorithms. However, when GNN layers are stacked, the embeddings of the nodes in the graph become extremely similar, degrading model performance; this is known as the over-smoothing problem of GNNs. Since deep neural networks generally exhibit stronger expressive and inference capabilities, alleviating over-smoothing in GNNs has become a current research focus. By analyzing the causes of over-smoothing in deep GNNs, this thesis proposes two effective methods to alleviate it and applies them to GNN-based recommendation algorithms to improve model performance. The specific contributions are as follows.

To address poor information propagation and low information-extraction efficiency in deep GNNs, this thesis proposes DI-GNN, a joint graph edge-dropping and initial residual connection method for graph neural networks. The model combines edge pruning with residual connections and consists of three parts: a graph edge-pruning module, a graph convolution module, and a graph residual connection module. The edge-pruning module processes the input graph before convolution by randomly dropping edges, increasing the diversity of the model's input data. The graph convolution module adopts a lightweight design that retains only the neighborhood-aggregation function of graph convolution and stacks multiple convolution layers to mine high-order relational information between nodes. The residual connection module feeds the initial node embeddings directly into the deep layers of the network, enabling fast information transmission and effective use of information. By introducing edge dropping and residual connections, DI-GNN optimizes the information propagation mechanism in GNNs, enhancing the model's learning ability in deep graph structures.

Experimental results show that DI-GNN outperforms LightGCN in recommendation performance on the Yelp2018, Amazon-book, and Gowalla datasets. Specifically, DI-GNN improves recall by 2.53%, 2.40%, and 0.66% over LightGCN, respectively. The effectiveness analysis suggests that DI-GNN can mitigate over-smoothing in deep GNNs and further improve recommendation performance by stacking more graph convolution layers.

To tackle the degradation of feature diversity in deep GNNs, this thesis proposes JNAM, a jump-node aggregation method for graph convolutional networks. Specifically, during graph convolution, JNAM samples nodes according to a biased sampling strategy and skips the convolution operation for them, directly outputting their input features. The biased strategy prefers nodes with higher degrees, preventing the disappearance of node-feature diversity and keeping model learning stable. In deep GNNs, jump-node aggregation helps high-degree nodes retain more effective information during message propagation and thus addresses feature-diversity degradation more effectively.

Experimental results show that JNAM outperforms LightGCN in recommendation performance on the Yelp2018, Amazon-book, and Gowalla datasets. Specifically, JNAM improves recall by 7.12%, 6.23%, and 3.08% over LightGCN, respectively. The effectiveness analysis suggests that JNAM achieves better performance when the node sampling rate is tuned per dataset.
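The DI-GNN pipeline described above (edge dropping, lightweight neighborhood aggregation, initial residual connection) can be sketched as follows. This is a minimal NumPy illustration, not the thesis's implementation; the function names, the `alpha` mixing weight for the initial residual, and the dense-matrix formulation are all illustrative assumptions.

```python
import numpy as np

def drop_edges(adj, drop_rate, rng):
    # Edge-pruning module: randomly drop a fraction of edges (kept symmetric)
    # to diversify the model's input graph. `drop_rate` is an assumed name.
    keep = rng.random(adj.shape) >= drop_rate
    keep = np.triu(keep, 1)
    keep = keep | keep.T
    return adj * keep

def sym_normalize(adj):
    # Symmetric normalization D^{-1/2} A D^{-1/2}, the usual form for
    # lightweight (LightGCN-style) graph convolution.
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def di_gnn_propagate(adj, x0, num_layers=8, drop_rate=0.2, alpha=0.1, seed=0):
    # Stack lightweight convolutions; each layer aggregates neighbors and
    # re-injects the initial embedding x0 (the initial residual connection).
    rng = np.random.default_rng(seed)
    h = x0
    for _ in range(num_layers):
        a_hat = sym_normalize(drop_edges(adj, drop_rate, rng))
        h = (1.0 - alpha) * (a_hat @ h) + alpha * x0
    return h
```

Because x0 is mixed back in at every layer, deep stacks cannot collapse all embeddings to a common fixed point, which is the intuition behind pairing edge dropping with the initial residual.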
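The jump-node aggregation idea in JNAM can likewise be sketched in a few lines: at each layer, a degree-biased sample of nodes skips the convolution and passes its input features through unchanged. Again this is an illustrative NumPy sketch under assumed names (`sample_rate`, degree-proportional sampling probabilities), not the thesis's code.

```python
import numpy as np

def jnam_propagate(adj, x0, num_layers=8, sample_rate=0.3, seed=0):
    # Degree-biased jump-node aggregation: sampled (preferably high-degree)
    # nodes skip the layer's aggregation, outputting their input features,
    # which preserves feature diversity in deep stacks.
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    probs = deg / deg.sum()          # bias sampling toward high-degree nodes
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    a_hat = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    k = max(1, int(sample_rate * n))  # how many nodes jump per layer
    h = x0
    for _ in range(num_layers):
        jump = rng.choice(n, size=k, replace=False, p=probs)
        out = a_hat @ h               # ordinary lightweight convolution
        out[jump] = h[jump]           # sampled nodes bypass aggregation
        h = out
    return h
```

The per-dataset tuning reported in the abstract corresponds here to choosing `sample_rate`: a higher rate lets more hub nodes keep their features intact, at the cost of less neighborhood mixing.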
Keywords/Search Tags: Recommender systems, graph neural networks, over-smoothing