
Neural Network Structure Optimization On Graph Convolutional Neural Network For Recommender System

Posted on: 2024-07-24 | Degree: Master | Type: Thesis
Country: China | Candidate: Y H Wang | Full Text: PDF
GTID: 2557307178492504 | Subject: Statistics
Abstract/Summary:
In the current era of artificial intelligence, massive information resources have brought great convenience and change to humanity. Technologies such as machine learning and deep learning, which enable intelligent production, intelligent recommendation, autonomous decision making, and other functions, have played a positive role in improving efficiency, reducing costs, and raising user experience and satisfaction. Personalized intelligent recommendation allows a system to suggest appropriate products and services based on a user's historical data and preferences, and is highly valued for its efficient information mining. However, data sparsity and ever-expanding scale pose great challenges in resource-limited environments and on resource-limited devices. Model compression is a technique for reducing the storage and computational complexity of a model: by pruning parameters and simplifying the model structure, it lowers the storage footprint, computational cost, and demand for computing resources while preserving a certain level of accuracy and efficiency, and can thus be used to optimize recommendation.

In this paper, we focus on the neural network structure of graph convolutional recommendation algorithms and propose a model compression method based on lightweight network structure design and knowledge distillation. The main contents and innovations of the research are as follows:

(1) To address the excessive complexity of the graph convolutional recommendation model, an improved lightweight recommendation algorithm based on model compression is proposed. Compared with the baseline method, the two evaluation metrics of recommendation effectiveness improve by up to 4.6% and 4.9%. In terms of compression, the improved method reduces the time and memory consumed by a single training iteration relative to the standard method.

(2) Within the knowledge distillation framework of model compression, the graph convolutional collaborative filtering recommendation algorithm and the weighted lightweight graph convolutional recommendation algorithm are selected as the teacher and student models, respectively. Applying knowledge distillation at the output layer, we propose a graph convolutional recommendation structure based on knowledge distillation. Compared with the benchmark recommendation methods, the two evaluation metrics improve by up to 7.6% and 2.1%, and the proposed method also reduces the training time and memory footprint of a single iteration.

(3) The influence of the network structure of the graph convolutional recommendation algorithm on recommendation performance is analyzed. Comparative experiments with different network structure settings are carried out, and the optimal structure for each dataset is determined.

(4) Hyperparameter settings under the knowledge distillation framework are analyzed. Using the lightweight graph convolutional recommendation algorithm based on knowledge distillation, heatmaps visualize all experimental results across different distillation temperatures and balance weight coefficients. The influence of the distillation temperature and the balance weight coefficient on recommendation quality is analyzed, and the optimal hyperparameter settings are determined for each dataset.
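The lightweight network structure design in (1) is not specified in detail in this abstract; a common simplification in lightweight graph convolutional recommenders drops the feature transformation and nonlinearity, keeping only neighbor aggregation and combining the per-layer embeddings by averaging. The sketch below illustrates that general idea; the function name, the averaging combination, and the use of a pre-normalized adjacency matrix are assumptions, not the thesis's actual method.

```python
import numpy as np

def lightweight_propagate(adj_norm, emb0, num_layers=3):
    """Sketch of lightweight graph convolution (assumed LightGCN-style):
    repeated neighbor aggregation with no weight matrices and no
    activation functions, then averaging the embeddings of all layers."""
    layer_embs = [emb0]
    e = emb0
    for _ in range(num_layers):
        e = adj_norm @ e              # pure neighborhood aggregation
        layer_embs.append(e)
    # combine layers by a simple mean (one common, assumed choice)
    return np.mean(layer_embs, axis=0)
```

Removing the transformation and nonlinearity is what shrinks per-iteration time and memory: each layer is a single sparse-dense product rather than a full dense layer.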
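The output-layer distillation in (2) and the two hyperparameters studied in (4), the distillation temperature and the balance weight coefficient, follow the standard knowledge distillation loss. The sketch below shows that generic loss; the exact loss form, the variable names `T` and `alpha`, and the weighting convention are assumptions rather than the thesis's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    """Generic output-layer knowledge distillation loss (sketch).
    T is the distillation temperature; alpha is the balance weight
    between the hard-label loss and the teacher's soft-label loss."""
    # soft targets: both teacher and student outputs softened by T
    p_teacher = softmax(teacher_logits / T)
    p_student = softmax(student_logits / T)
    # cross-entropy to the teacher's distribution, rescaled by T^2
    soft_loss = -(p_teacher * np.log(p_student + 1e-12)).sum(axis=-1).mean() * T**2
    # ordinary cross-entropy with the ground-truth labels
    p = softmax(student_logits)
    hard_loss = -np.log(p[np.arange(len(targets)), targets] + 1e-12).mean()
    return alpha * hard_loss + (1.0 - alpha) * soft_loss
```

A heatmap over a grid of `T` and `alpha` values, as described in (4), is then a matter of evaluating the trained student's recommendation metrics at each grid point.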
Keywords/Search Tags:Recommendation algorithm, Deep Neural Network, Model Compression, Lightweight model, Knowledge Distillation