
Improvement Of Prompt Algorithm Based On Graph Model

Posted on: 2024-05-05    Degree: Master    Type: Thesis
Country: China    Candidate: J G Chen    Full Text: PDF
GTID: 2557307079961469    Subject: Statistics
Abstract/Summary:
The Prompt algorithm has gradually become a research hotspot in natural language processing (NLP) owing to its excellent performance in low-resource scenarios such as few-shot and zero-shot learning. However, traditional Prompt algorithms are sensitive to the choice of prompt template and to the predictions filled into the label slot, and they treat the semantic similarity between the label word and different prompt words inconsistently when computing the model loss. This paper mitigates these issues by constructing a knowledge-enhanced prompt graph and introducing external information, achieving good results. The main improvements and contributions of this paper can be summarized as follows:

Firstly, an in-domain pre-training method based on the masked language model (MLM) objective and simple contrastive learning of sentence embeddings (SimCSE), called IPMS, is proposed. Specific improvements include: (1) after general pre-training, the IPMS algorithm performs in-domain pre-training on the downstream task datasets to improve the model's familiarity with the downstream tasks; (2) MLM and SimCSE are used as the pre-training architecture to mitigate the anisotropy of the word and sentence vectors produced by traditional models; (3) through data preprocessing, the Prompt function is integrated into the pre-training corpus to improve the model's familiarity with the Prompt function.

Secondly, a knowledge-enhanced prompt graph is established to model the semantic relationship between label words and prompt words. Specific improvements include: (1) the prompt-based label embedding (PBLE) algorithm is proposed, which uses the Prompt function to supply additional context to otherwise "isolated" label words, making the resulting label word vectors more accurate and alleviating the problem of polysemy; (2) prompt words are vectorized with the bidirectional encoder representations from Transformers (BERT) model, and a knowledge-enhanced prompt graph is constructed with label words and prompt words as nodes and the semantic similarity between them as edge weights, thereby introducing more external prior knowledge into the model.

Thirdly, a Prompt algorithm based on the knowledge-enhanced prompt graph (PABG) is proposed. Specific improvements include: (1) positive and negative prompt words are extracted from the knowledge-enhanced prompt graph and used as external prompt information to assist model prediction; (2) when calculating the model loss, rewards are assigned according to the difference in semantic similarity between the label word and different prompt words, encouraging the model to learn more semantic information.

The final experimental results show that the proposed method achieves the best performance on multiple public datasets, including in few-shot and zero-shot scenarios. In addition, ablation experiments on the main innovations verify the effectiveness of the proposed method.
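A minimal sketch of how the IPMS-style in-domain pre-training step described above might combine an MLM loss with an unsupervised SimCSE contrastive loss. The thesis's actual implementation is not shown here; the function name `ipms_step`, the backbone checkpoint, the simplified masking scheme, and the loss weight `lam` are all illustrative assumptions.

```python
# Illustrative sketch (not the thesis code): joint MLM + unsupervised-SimCSE
# in-domain pre-training step on a BERT backbone from Hugging Face.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, BertForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.train()  # dropout must be active so SimCSE's two views differ

def mask_tokens(input_ids, mlm_prob=0.15):
    """Simplified masking: replace ~15% of tokens with [MASK]
    (real BERT uses an 80/10/10 mask/random/keep split)."""
    labels = input_ids.clone()
    prob = torch.full(labels.shape, mlm_prob)
    masked = torch.bernoulli(prob).bool() & (input_ids != tokenizer.pad_token_id)
    labels[~masked] = -100                      # -100 is ignored by the MLM loss
    input_ids = input_ids.clone()
    input_ids[masked] = tokenizer.mask_token_id
    return input_ids, labels

def ipms_step(sentences, temperature=0.05, lam=1.0):
    """One combined step: MLM loss + SimCSE loss (dropout as data augmentation)."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    # --- MLM branch ---
    masked_ids, labels = mask_tokens(batch["input_ids"])
    mlm_loss = model(input_ids=masked_ids,
                     attention_mask=batch["attention_mask"],
                     labels=labels).loss
    # --- SimCSE branch: encode the *same* sentences twice; the two [CLS]
    #     views differ only through dropout noise.
    def cls_embed():
        out = model.bert(input_ids=batch["input_ids"],
                         attention_mask=batch["attention_mask"])
        return out.last_hidden_state[:, 0]      # [CLS] vector
    z1, z2 = cls_embed(), cls_embed()
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / temperature
    targets = torch.arange(sim.size(0))         # positives sit on the diagonal
    cse_loss = F.cross_entropy(sim, targets)    # in-batch negatives
    return mlm_loss + lam * cse_loss
```

Integrating the Prompt function into the corpus, as point (3) describes, would amount to wrapping each sentence in the prompt template before it is passed to `ipms_step`.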
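The knowledge-enhanced prompt graph itself could be built roughly as follows: each label word is embedded inside a prompt template (the PBLE idea, so the label word is no longer "isolated"), each prompt word is embedded with BERT, and edges carry their cosine similarity. This is a sketch under assumptions: the sentiment template, the word lists, and the use of `networkx` are all hypothetical, not taken from the thesis.

```python
# Illustrative sketch (not the thesis code) of a knowledge-enhanced prompt graph:
# nodes are label words and prompt words, edge weights are cosine similarities.
import torch
import torch.nn.functional as F
import networkx as nx
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def embed(text, target):
    """Mean-pool the BERT vectors of `target`'s subword tokens inside `text`."""
    enc = tokenizer(text, return_tensors="pt")
    hidden = encoder(**enc).last_hidden_state[0]
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(target_ids) + 1):   # locate the subword span
        if ids[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0)
    raise ValueError(f"{target!r} not found in {text!r}")

template = "The sentiment of this sentence is {}."    # hypothetical PBLE template
label_words = ["great", "terrible"]                   # hypothetical word lists
prompt_words = ["wonderful", "awful", "boring", "superb"]

graph = nx.Graph()
for lw in label_words:
    lv = embed(template.format(lw), lw)               # PBLE: label word in context
    for pw in prompt_words:
        pv = embed(pw, pw)                            # prompt word embedded alone
        w = F.cosine_similarity(lv, pv, dim=0).item()
        graph.add_edge(("label", lw), ("prompt", pw), weight=w)
```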
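Finally, one plausible reading of the PABG loss in the third contribution: standard cross-entropy over the label-word logits at the [MASK] position, plus a reward term in which positive and negative prompt words drawn from the graph are weighted by their edge-weight similarities. The function `pabg_loss`, the exact form of the reward, and the coefficient `alpha` are assumptions for this sketch, not the thesis's definition.

```python
# Illustrative sketch (not the thesis code) of a PABG-style loss with
# similarity-weighted rewards for positive/negative prompt words.
import torch
import torch.nn.functional as F

def pabg_loss(mask_logits, label_ids, gold,
              pos_ids, sim_pos, neg_ids, sim_neg, alpha=0.1):
    """
    mask_logits: (B, V) logits at the [MASK] position
    label_ids:   (L,) vocab ids of the label words; gold: (B,) gold label index
    pos_ids/neg_ids: vocab ids of positive/negative prompt words from the graph
    sim_pos/sim_neg: their edge weights (semantic similarity) to the gold label
    """
    ce = F.cross_entropy(mask_logits[:, label_ids], gold)
    log_p = F.log_softmax(mask_logits, dim=-1)
    # The more similar a positive prompt word is to the label word, the more
    # its probability is encouraged; negatives are penalized symmetrically.
    reward = (log_p[:, pos_ids] * sim_pos).sum(-1) - \
             (log_p[:, neg_ids] * sim_neg).sum(-1)
    return ce - alpha * reward.mean()
```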
Keywords/Search Tags: Natural language processing, Prompt algorithm, In-domain pre-training, Knowledge-enhanced prompt graph