With the rapid development of short video sharing platforms, it has become increasingly important to provide users with personalized recommendations from the massive volume of available videos. The data on such platforms is rich and varied: it contains not only a large amount of user historical behavior data, but also the multimodal information carried by the videos themselves. However, because of the "semantic gap" between different video modalities, existing recommendation methods that exploit richer multimodal information to model user interests are often unable to capture users' fine-grained, modality-specific preferences for videos. In addition, much existing research focuses on the accuracy of personalized recommendation while ignoring the demand for diversity, and therefore fails to evaluate the quality of recommendation results comprehensively. This paper studies video recommendation methods based on Graph Convolutional Networks (GCN). The main research contents are as follows:

(1) Existing video recommendation methods fuse multimodal features only in a simplistic way, and existing GCN-based recommendation methods suffer from node noise interference when learning the embedding representations of user and video nodes. To address this, a multimodal fusion video recommendation model based on self-supervised graph learning is proposed. The model uses self-supervised graph contrastive learning to learn multimodal feature representations of users and videos within a graph convolutional network, and fuses the multimodal features with a designed modality-specific "expert" fusion module (an illustrative sketch of such an objective and fusion layer is given after this abstract). Experiments are carried out on the MovieLens-1M multimodal movie dataset and the TikTok multimodal video dataset. Compared with existing models, the experimental results show significant improvements in accuracy, recall and NDCG (Normalized Discounted Cumulative Gain).

(2) Existing video recommendation methods model only the interactions between users and videos, ignoring the fact that a single user has multiple, distinct interests, and they evaluate model performance with general ranking metrics alone. To address this, a diverse video recommendation model based on users' multiple interests is proposed. Building on the self-supervised graph contrastive learning work above, a multi-interest recommendation model is constructed from the user's historical behavior sequence, and a multi-interest extraction module is designed to extract the user's multi-interest representation under a multimodal view (a sketch of one such extractor also follows this abstract). Experiments are carried out on the MovieLens-1M, TikTok and Amazon datasets, and the models are evaluated with accuracy, recall, NDCG and diversity metrics. The experimental results demonstrate the effectiveness of the method.

(3) A video recommendation system based on graph convolutional networks is designed and implemented. Building on the models above, the system is implemented with a separated front end and back end, using Python's Django framework for the back end and the Vue framework for the front end. It includes functional modules such as user management, video management, recommendation and search, and user interaction. The recommendation models from the research above are deployed in the system, realizing the video recommendation function and providing users with relevant and diverse video lists.
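The abstract only names the components of contribution (1); as a rough, self-contained illustration of what a self-supervised graph contrastive objective and a modality-"expert" fusion layer can look like, the PyTorch sketch below pairs an InfoNCE-style loss over two augmented views of GCN node embeddings with a softmax-gated fusion over per-modality features. The names (info_nce, ModalityExpertFusion) and all design details are assumptions made for illustration, not the exact modules used in the thesis.

```python
# Minimal sketch, assuming a PyTorch implementation; not the thesis's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(view_a: torch.Tensor, view_b: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """Contrast matching nodes across two augmented graph views; all other nodes act as negatives."""
    a = F.normalize(view_a, dim=-1)            # [N, d] node embeddings from view A
    b = F.normalize(view_b, dim=-1)            # [N, d] node embeddings from view B
    logits = a @ b.t() / tau                   # [N, N] pairwise similarities
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)     # positives lie on the diagonal

class ModalityExpertFusion(nn.Module):
    """Fuse per-modality embeddings (e.g. visual/acoustic/textual) with a softmax gate,
    so each node weights the modality 'experts' according to its own content."""
    def __init__(self, dim: int, num_modalities: int):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_modalities)])
        self.gate = nn.Linear(dim * num_modalities, num_modalities)

    def forward(self, modal_feats):            # list of [N, d] tensors, one per modality
        expert_out = torch.stack([f(x) for f, x in zip(self.experts, modal_feats)], dim=1)  # [N, M, d]
        weights = F.softmax(self.gate(torch.cat(modal_feats, dim=-1)), dim=-1)              # [N, M]
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)                              # [N, d] fused embedding
```

In this kind of setup, the contrastive loss is typically added to the recommendation (e.g. BPR) loss as an auxiliary term, which is one common way to reduce the influence of noisy neighbors on the learned node embeddings.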
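Likewise, the multi-interest extraction module in contribution (2) is described only at a high level. The sketch below shows one common way to extract K interest vectors from a user's behavior sequence using learnable attention queries, in the spirit of multi-interest models such as MIND and ComiRec; the class name, hyperparameters, and routing choice are hypothetical and may differ from the thesis's actual module.

```python
# Minimal sketch, assuming attention-based multi-interest extraction; hypothetical names.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiInterestExtractor(nn.Module):
    def __init__(self, dim: int, num_interests: int = 4):
        super().__init__()
        # K learnable "interest" queries that attend over the behavior sequence.
        self.interest_queries = nn.Parameter(torch.randn(num_interests, dim) * 0.02)

    def forward(self, hist_emb: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # hist_emb: [B, L, d] embeddings of historically interacted videos
        # mask:     [B, L], 1 for real interactions, 0 for padding (assumes >= 1 real item per user)
        scores = torch.einsum('kd,bld->bkl', self.interest_queries, hist_emb)   # [B, K, L]
        scores = scores.masked_fill(mask.unsqueeze(1) == 0, float('-inf'))
        attn = F.softmax(scores, dim=-1)                                        # attention per interest
        return torch.einsum('bkl,bld->bkd', attn, hist_emb)                     # [B, K, d] interest vectors

# At ranking time, a candidate video can be scored against its closest interest vector
# (max over K), which lets different interests surface different items and supports
# diversity-aware evaluation of the recommended list.
```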