Emotional dialogue systems are an important research branch of human-computer dialogue; their goal is to integrate emotional information into dialogue systems so as to generate responses with appropriate emotions. Emotional dialogue changes the traditional human-machine dialogue model so that conversation with a machine is no longer cold and mechanical, making dialogue more meaningful. Emotional dialogue systems have wide applications in fields such as psychological counseling, early childhood education, and entertainment, improving the user experience. With the advent of the big-data era and the rapid development of deep learning, generative dialogue systems built on large datasets and deep learning techniques have become the research mainstream: by training on large amounts of data to learn human conversation patterns, the model automatically generates responses. Aiming to generate semantically coherent and emotionally appropriate responses in emotional dialogue systems, this thesis uses a hierarchical neural network structure and a self-attention enhancement mechanism to conduct research from the perspectives of the historical content of multi-turn dialogues, the capture of emotional information, and the extraction of dialogue topics. Specifically, the main research content and innovations of this thesis are as follows:

(1) We propose an emotional dialogue generation model based on the self-attention mechanism. Most existing emotional dialogue generation models take single-turn dialogue as the research object, which is inconsistent with reality: as the number of dialogue turns increases, users' emotions also change dynamically. Moreover, in some special Chinese contexts, such as rhetorical questions and irony, users' real emotions are usually hidden in the semantics and cannot be judged from emotional vocabulary alone. The proposed model takes into account both the contextual content and the latent emotional information in multi-turn conversations to generate higher-quality emotional responses. To better handle long conversation histories, the model adopts a hierarchical recurrent neural network structure that processes the context layer by layer, and to avoid mutual interference between the two kinds of information, it uses two parallel self-attention enhanced encoders. The model uses the semantic representation to update the latent emotional representation and finally fuses the two to generate the response. Experiments on public datasets show that the model can generate emotionally appropriate responses in multi-turn dialogues and improve the user experience.

(2) We propose an emotional dialogue generation model that incorporates topic information. Current emotional dialogue generation models focus on integrating emotion into the response generation process but neglect semantic content. To improve the semantic quality of generated responses, this thesis proposes an emotional dialogue generation model that incorporates topic information and thereby enhances the relevance between the response and the context. First, the model trains a topic module to obtain the topic probability distribution of the input and encodes it into a vector representation at the context level. Then, in the independent encoding module, recurrent neural networks capture the semantic signals and the emotional signals respectively. In the fusion module, the topic information is used to enhance the representational ability of the semantic vector. Finally, in the generation module, the information representations learned from the context are fed into the generator to produce semantically coherent and emotionally appropriate responses. Experimental results on a publicly available dataset show that the model generates emotional responses more appropriate to the current conversation content than the baseline models. We also analyze, through experiments, the effects of different context lengths and of the individual modules on the performance of the model, demonstrating the robustness of the model and the effectiveness of each module.
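The hierarchical encoding with two parallel self-attention enhanced encoders described in (1) can be sketched as follows. This is a minimal NumPy illustration, not the thesis's implementation: mean pooling stands in for the word-level recurrent encoder, the projection weights are random, and the additive update and concatenation fusion are assumed forms.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence x of shape (T, d)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over keys
    return attn @ v

rng = np.random.default_rng(0)
d = 16
# Three dialogue turns, each a sequence of word embeddings (toy data).
turns = [rng.standard_normal((n, d)) for n in (5, 7, 4)]

# Word level: mean pooling as a stand-in for the word-level RNN encoder.
utt_vecs = np.stack([t.mean(axis=0) for t in turns])   # (3, d)

# Context level: two parallel self-attention enhanced encoders, one for
# semantic content and one for latent emotional information.
sem_w = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]
emo_w = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]
semantic = self_attention(utt_vecs, *sem_w)            # (3, d)
emotional = self_attention(utt_vecs, *emo_w)           # (3, d)

# The semantic representation updates the emotional one (illustrative rule),
# and the two are fused, here by concatenation, to condition the decoder.
emotional = emotional + 0.5 * semantic
fused = np.concatenate([semantic[-1], emotional[-1]])  # (2d,) context vector
print(fused.shape)
```

In a full model, `fused` would initialize (or be attended to by) the response decoder; the point of the two parallel encoders is that semantic and emotional signals get separate attention weights and so do not interfere with each other.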
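The topic pipeline in (2) can likewise be sketched in a few lines. The bag-of-words topic inference, the tiny topic-word matrix, and the gated fusion below are all illustrative assumptions; the thesis's actual topic module and fusion function may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = ["weather", "rain", "movie", "actor", "happy", "sad"]
# Hypothetical topic-word matrix from a pretrained topic module (K = 2 topics).
topic_word = np.array([[0.40, 0.40, 0.05, 0.05, 0.05, 0.05],   # topic 0: weather
                       [0.05, 0.05, 0.40, 0.40, 0.05, 0.05]])  # topic 1: movies

def topic_distribution(tokens):
    """Infer P(topic | input) from normalized topic-word likelihoods."""
    idx = [vocab.index(t) for t in tokens if t in vocab]
    scores = topic_word[:, idx].sum(axis=1)
    return scores / scores.sum()

context = ["rain", "weather", "sad"]
theta = topic_distribution(context)             # input topic probability distribution

d = 8
topic_emb = rng.standard_normal((2, d)) * 0.1   # topic embedding table
topic_vec = theta @ topic_emb                   # context-level topic vector

semantic_vec = rng.standard_normal(d)           # stand-in for the semantic RNN output
# Fusion module: gate the semantic vector with the topic vector (assumed form).
gate = 1.0 / (1.0 + np.exp(-(semantic_vec + topic_vec)))
fused = gate * semantic_vec + (1.0 - gate) * topic_vec
print(theta, fused.shape)
```

Here the weather-related context yields a distribution concentrated on topic 0, and the gated `fused` vector is what the generation module would condition on so that the response stays on topic.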