The goal of text generation is to produce text that conforms to syntactic rules and is semantically complete. It is an active research topic in natural language processing with a wide range of potential applications, and how to use deep neural network models to generate high-quality natural language sentences has become a prominent research question. Answer generation is an important direction within text generation, requiring machines to automatically generate answers to given questions. With the accumulation of training data and the growth of hardware computing power, encoder-decoder neural network models based on deep learning have become the mainstream approach to answer generation in question answering.

In recent years, the Variational Auto-Encoder (VAE) and the Conditional Variational Auto-Encoder (CVAE) have been introduced into generative models to encode text, in order to exploit prior knowledge and mine the latent associations hidden in the data. However, the latent variable obtained by the conditional variational auto-encoder contains only local information about the text, so the decoder cannot take full advantage of the complete text. Some studies have shown that the latent variable learned by a variational auto-encoder tends to remember the beginning of the text and its length, capturing only limited local features. To alleviate this problem, this paper proposes ERECVAE, an optimized conditional variational auto-encoder model. During training, it uses contrastive learning to construct positive and negative samples that optimize the latent-variable representation so that it captures the global information of the text, thereby generating more relevant answers. In addition, traditional answer generation models ignore the emotional information present in dialogue; this paper therefore introduces emotion discrimination to maintain emotional consistency, so that the generated answers are both relevant to the context and emotionally consistent. Experimental results on the English datasets DailyDialog and PersonaChat demonstrate the effectiveness of the ERECVAE model.
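As an illustration of the kind of contrastive objective described above, the following is a minimal sketch of an InfoNCE-style loss over latent vectors, in which two encodings of the same text form a positive pair and the other texts in the batch act as negatives; the function name, the way positive pairs are obtained, and the temperature value are assumptions for illustration, not the exact ERECVAE formulation.

```python
# Illustrative sketch (not the ERECVAE implementation): an InfoNCE-style contrastive
# loss that pulls together two latent codes of the same text (positives) and pushes
# apart latent codes of different texts in the batch (negatives).
import torch
import torch.nn.functional as F

def contrastive_latent_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: [batch, dim] latent codes for two views of the same batch of texts."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    # Cosine-similarity matrix between every view-a latent and every view-b latent.
    logits = z_a @ z_b.t() / temperature
    # Diagonal entries are the positive pairs; off-diagonal entries serve as negatives.
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)
```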
This paper further studies question answering in a real application domain and finds two main problems: the number of domain samples is small, so a well-performing model cannot be trained from the limited samples available, and the quality of answers produced by a generative model cannot be guaranteed. For question answering in the electric power domain, this paper therefore explores the application and analyzes the performance of paragraph retrieval, an extractive machine reading comprehension model, and the optimized conditional variational auto-encoder answer generation model proposed in this paper. Experiments show that among the three paragraph retrieval algorithms RocketQA, TF-IDF, and BM25, TF-IDF outperforms the other two with a retrieval accuracy of 97.4%, and its millisecond-level retrieval time meets the needs of practical application scenarios; with the DuReaderrobust-based reading comprehension model used for answer extraction, the fine-tuned results improve on those obtained without fine-tuning; and the performance of the optimized conditional variational auto-encoder answer generation model is significantly improved after data augmentation and fusion with a pointer-generator network.
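As a small illustration of TF-IDF-based paragraph retrieval of the kind compared above, the sketch below ranks candidate paragraphs by cosine similarity between a query vector and paragraph vectors using scikit-learn; the example corpus, query, and variable names are placeholders and do not reflect the paper's actual data or implementation.

```python
# Minimal TF-IDF retrieval sketch (placeholder data, not the paper's implementation):
# rank candidate paragraphs by cosine similarity to the query in TF-IDF space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paragraphs = [
    "Transformer maintenance requires periodic oil sampling and analysis.",
    "Relay protection settings must be verified after commissioning.",
]
query = "How often should transformer oil be sampled?"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(paragraphs)   # fit vocabulary on the paragraph corpus
query_vec = vectorizer.transform([query])           # vectorize the query with the same vocabulary

scores = cosine_similarity(query_vec, doc_matrix).ravel()
best = scores.argmax()                               # index of the top-ranked paragraph
print(best, scores[best], paragraphs[best])
```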