In recent years, with the development of big data and neural networks, open-domain dialogue systems have received increasing research attention. However, most current dialogue generation methods focus mainly on the quality of the reply content and ignore the effect of emotion on response generation, even though emotional expression makes replies more natural and fluent. It is therefore important to take emotion into account in natural-language dialogue systems. In addition, when a traditional RNN-based Seq2Seq model generates responses, it suffers from problems such as unclear emotional expression, monotonous replies, and low relevance. To address these problems, this paper carries out the following research.

First, to address the fact that existing dialogue generation models neglect global features when encoding context, this paper proposes an emotional dialogue generation method based on a Transformer-L neural network. The method uses a Transformer as the encoder to extract global semantic features of the input sequence. Built on the multi-head attention mechanism, the Transformer extracts semantic features from multiple subspaces and obtains a more comprehensive contextual semantic representation through weighted fusion of the features from the different subspaces. Then, by embedding the designated emotion words into a dynamic attention mechanism, the model attends to emotion information during decoding and thereby generates responses with the specified emotion. In the experiments, comparisons with baseline models and ablation studies on the Transformer encoder and the emotion word embedding show that both components effectively improve the emotion accuracy and content relevance of the generated emotion-specific responses.

Second, to address the tendency of existing dialogue generation models to produce generic replies, an emotional dialogue generation model based on a generative adversarial network (GAN) is proposed. The generator adopts a GRU-based Seq2Seq model and embeds the emotion word vector during decoding to drive the model toward emotional replies; random noise is introduced and fed into the discriminator together with the generated response. Through a pre-training strategy and a reinforcement learning strategy, the generator escapes the local optima induced by its objective function during the alternating adversarial training against the discriminator. Through ablation and comparison experiments and the selection of optimal model parameters, the final experiments show that emotion word embedding and adversarial training reduce the probability of generating safe (generic) responses and improve the emotion accuracy of the generated emotion-specific responses.
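To make the first contribution concrete, the following is a minimal sketch, not the author's exact Transformer-L architecture, of how a Transformer encoder with multi-head attention can be combined with a designated emotion embedding injected into the decoder-side attention. All module names, layer sizes, the number of emotion categories, and the way the emotion vector is mixed into the attention query are assumptions made for illustration only.

```python
# Sketch: Transformer encoder for global context features + emotion-conditioned
# decoder attention. Sizes and wiring are illustrative assumptions.
import torch
import torch.nn as nn

class EmotionAwareGenerator(nn.Module):
    def __init__(self, vocab_size, d_model=256, n_heads=8, n_layers=4, n_emotions=6):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.emo_emb = nn.Embedding(n_emotions, d_model)   # designated emotion-category embedding
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)           # multi-head attention encoder
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)  # decoder-side attention
        self.decoder = nn.GRU(d_model * 2, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids, emotion_id):
        memory = self.encoder(self.tok_emb(src_ids))        # global contextual features of the input
        emo = self.emo_emb(emotion_id).unsqueeze(1)         # (batch, 1, d_model)
        tgt = self.tok_emb(tgt_ids)
        # Mix the emotion vector into the attention query so the context read-out is emotion-aware.
        ctx, _ = self.attn(tgt + emo, memory, memory)
        hidden, _ = self.decoder(torch.cat([tgt, ctx], dim=-1))
        return self.out(hidden)                              # next-token logits, (batch, tgt_len, vocab)

# Example forward pass with toy shapes (assumed, for illustration):
# model = EmotionAwareGenerator(vocab_size=8000)
# logits = model(src_ids, tgt_ids, emotion_id)
```

Such a model would typically be trained with token-level cross-entropy against reference replies, with the emotion label supplied as an extra input during both training and inference.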
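For the second contribution, the following rough sketch illustrates one common way (under assumed interfaces, not the paper's actual code) to implement the adversarial stage: the GRU-based generator samples an emotion-conditioned reply, the discriminator scores it against the dialogue context together with the shared random noise, and the discriminator's score is used as a REINFORCE reward, so generic "safe" replies that the discriminator learns to reject receive low reward. The `generator.sample` and `discriminator(...)` interfaces are hypothetical placeholders.

```python
# Sketch of one alternating adversarial training step with a policy-gradient
# generator update. Interfaces, shapes, and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def adversarial_step(generator, discriminator, g_opt, d_opt,
                     src_ids, real_reply, emotion_id, noise_dim=16, max_len=20):
    batch = src_ids.size(0)
    noise = torch.randn(batch, noise_dim)                    # random noise also fed to the discriminator

    # Generator samples an emotion-conditioned reply and returns per-token log-probs
    # (assumed interface): reply_ids (batch, T), log_probs (batch, T).
    reply_ids, log_probs = generator.sample(src_ids, emotion_id, noise, max_len=max_len)

    # Discriminator update: real (context, reply) pairs vs. generated ones.
    d_real = discriminator(src_ids, real_reply, noise)
    d_fake = discriminator(src_ids, reply_ids.detach(), noise)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: discriminator probability as reward (REINFORCE).
    reward = torch.sigmoid(discriminator(src_ids, reply_ids, noise)).detach()
    g_loss = -(reward.unsqueeze(1) * log_probs).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

In practice, and as the abstract describes, the generator is first pre-trained with maximum-likelihood on reference replies before this step is run in alternation with discriminator updates, which helps it avoid the local optima of its original objective.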