
Knowledge Enhanced Pretrained Model For Personalized Rating Prediction And Explanation Generation

Posted on: 2024-01-05  |  Degree: Master  |  Type: Thesis
Country: China  |  Candidate: Q X Wang  |  Full Text: PDF
GTID: 2568307052995819  |  Subject: Electronic information
Abstract/Summary:
As a classic task in recommender systems, rating prediction has long received extensive attention from researchers. As the number of users of e-commerce platforms continues to grow and user-generated reviews accumulate, related research has introduced review texts, in addition to user IDs and item IDs, to improve rating prediction. In recent years, a large number of pre-trained models built on stacked Transformer layers have emerged, and thanks to continual improvements in GPU computing power, these large-scale pre-trained models with deep architectures and many parameters have performed well on text-related tasks. Recently, how to apply pre-trained language models to recommender systems has been attracting more and more attention from researchers. However, existing research mostly ignores two questions: (1) how to model, on top of a pre-trained model, the rich knowledge contained in the fine-grained aspects of user reviews and their associated knowledge graphs; (2) how to incorporate users, items, ratings, and aspects as cues into a pre-trained model to help it generate explanatory text.

For the first question, rating prediction based on personalized reviews aims to use the textual information in existing reviews to model users' interests and item characteristics. Most existing research focuses on directly modeling the unstructured text, which raises two main problems. First, the rich knowledge contained in the fine-grained aspects of each review and its associated knowledge graph is rarely used to complement the plain text for better modeling of user-item interactions. Second, the power of pre-trained language models has not been carefully studied for the downstream task of review-based rating prediction. This thesis proposes Knowledge-aware Collaborative Filtering based on Pre-trained Language Models (KCF-PLM) to address these two problems. Specifically, to exploit the rich knowledge, aspects are extracted from reviews, a heterogeneous weighted knowledge graph is constructed, and a Transformer network models these aspects with respect to the interaction of a given user-item pair. In addition, to represent users and items, all historical reviews of a user or an item are taken as input to a pre-trained language model. KCF-PLM tightly integrates the Transformer network and the pre-trained language model through representation propagation on the knowledge graph and personalized attention over aspects, thereby unifying review texts, aspects, knowledge graphs, and pre-trained models in one model. Comprehensive experiments on several public datasets demonstrate the effectiveness of KCF-PLM and its main components.

For the second question, personalized explanation generation aims to generate the reasons for recommending an item to a user by jointly modeling the user and the item. Most previous studies adopt an encoder-decoder architecture, encoding attributes such as user and item IDs as vectors, feeding them to the decoder, and translating them into explanatory text. However, user and item IDs are merely identifiers in recommender systems, so there is a semantic gap between them and the text to be generated. Moreover, in real recommendation scenarios, explanation generation often suffers from insufficient prompt information. In addition, although pre-trained language models perform excellently on text generation tasks, their complex structures and large-scale parameters mean that few studies have explored tailoring them to recommendation explanation generation. Inspired by recent progress in prompt learning, this thesis proposes a personalized prompt-based explanation generation model, PPL-Gen: personalized prompt templates adapt the pre-trained model to the explanation generation task; an embedded (continuous) prompt is designed to bridge the semantic gap between IDs and words; and, to address insufficient prompt information, the user's rating of the item and the set of aspects of interest are predicted in a pre-task and used as supplementary information. Experimental results on multiple datasets demonstrate the superiority of the PPL-Gen model.
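The KCF-PLM design described above can be illustrated with a minimal NumPy sketch. All names and shapes here are hypothetical stand-ins: the random vectors take the place of PLM-encoded user/item review histories and knowledge-graph aspect embeddings, and a random linear layer stands in for the learned rating regressor; the only mechanism actually shown is the personalized attention of a user-item pair over aspect embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (hypothetical)

# Stand-ins for PLM outputs: one vector per user/item, obtained in the
# real model by encoding all of their historical reviews.
user_vec = rng.normal(size=d)
item_vec = rng.normal(size=d)

# Stand-ins for aspect embeddings, which in the real model come from
# representation propagation over a heterogeneous weighted knowledge graph.
aspects = rng.normal(size=(5, d))

def personalized_aspect_attention(query, aspects):
    """Scaled softmax attention of a user-item query over aspect embeddings."""
    scores = aspects @ query / np.sqrt(len(query))
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ aspects  # weighted aspect summary vector

# The user-item pair defines the personalized query.
query = user_vec + item_vec
aspect_summary = personalized_aspect_attention(query, aspects)

# Fuse review-based and aspect-based signals into a rating score;
# `w` stands in for a learned regression layer.
fused = np.concatenate([user_vec, item_vec, aspect_summary])
w = rng.normal(size=fused.shape)
rating = float(fused @ w)
```

The point of the sketch is the fusion pattern: the rating depends jointly on review-derived representations and on an aspect summary that is re-weighted per user-item pair, rather than on a single fixed aspect vector.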
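The personalized prompt idea behind PPL-Gen can likewise be sketched. The function below is a hypothetical illustration, not the thesis's implementation: it verbalizes user/item ID tokens together with the pre-task predictions (rating and aspects of interest) into a textual prompt. In the actual model the ID cues would be embedded (continuous) prompt vectors rather than literal strings, which is how the semantic gap between IDs and words is bridged.

```python
def build_personalized_prompt(user_id, item_id, predicted_rating, aspects):
    """Assemble a (hypothetical) personalized prompt for explanation generation.

    `predicted_rating` and `aspects` play the role of the pre-task outputs
    that supplement otherwise insufficient prompt information.
    """
    aspect_text = ", ".join(aspects)
    return (
        f"[USER_{user_id}] [ITEM_{item_id}] "
        f"rating: {predicted_rating:.1f}; aspects: {aspect_text}. "
        "Explain why this item suits this user:"
    )

# Example: the prompt a pre-trained language model would complete.
prompt = build_personalized_prompt("u42", "i7", 4.5, ["battery life", "price"])
```

Feeding such a template to a pre-trained generator turns explanation generation into a conditional text-completion task, which is the essence of adapting prompt learning to recommendation.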
Keywords/Search Tags: Review-based Rating Prediction, Pre-trained Language Model, Knowledge Graph, Explanation Generation, Prompt Learning