
Knowledge-guided Natural Language Understanding System

Posted on: 2022-01-22 | Degree: Master | Type: Thesis
Country: China | Candidate: K Q He | Full Text: PDF
GTID: 2558306914479804 | Subject: Electronic and communication engineering
Abstract/Summary:
The dialogue system offers a human-like mode of interaction: users can conveniently obtain all kinds of information and services by conversing in natural language, and such systems have been widely applied in practice. This paper studies the natural language understanding (NLU) system within the task-oriented dialogue system, which constitutes the core underlying capability of the whole dialogue system. The NLU system aims to parse and understand natural language user queries, structuring human language into a machine-readable form. The accuracy of natural language understanding strongly affects the performance of the entire task-oriented dialogue system.

The earliest NLU systems were implemented with rules designed from expert knowledge. Their recognition accuracy was low and their generalization ability poor, so they could be applied only to relatively simple scenarios. Deep learning methods have greatly improved NLU systems and brought significant progress to the field. Although current deep learning methods achieve strong performance, many challenges remain, such as data dependence, cold start, and model generalization. The essential cause of these problems is that current deep learning methods rely too heavily on data while lacking knowledge representation and reasoning capabilities.

This paper therefore proposes a knowledge-guided natural language understanding system. Built on deep learning, it fuses the data-driven learning paradigm with knowledge representation and reasoning, addressing the poor interpretability of deep learning methods and their heavy dependence on labeled data. Our method aims to improve neural network models in low-resource scenarios, especially on rare and OOV (out-of-vocabulary) words, and to enhance the controllability of the model and the generalization ability of its reasoning. We define four forms of knowledge organization: shared cross-lingual knowledge, label structure knowledge, large-scale commonsense knowledge, and linguistic syntactic knowledge, and we explore from these different dimensions how exploiting knowledge affects the learning ability of neural network models. The main work and contributions of this paper are as follows:

1) Conversational agents have recently improved their understanding capabilities substantially through neural networks. Such deep neural models, however, do not apply to most human languages, because annotated training data for the various NLP tasks is lacking. We propose a multi-level cross-lingual transfer model with language-shared and language-specific knowledge to improve spoken language understanding for low-resource languages. Our method explicitly separates the model into a language-shared part and a language-specific part, transferring cross-lingual knowledge to improve monolingual slot tagging, especially for low-resource languages. To refine the shared knowledge, we add a language discriminator and employ adversarial training to reinforce the separation of information. Besides, we adopt multi-level knowledge transfer in an incremental and progressive way to acquire multi-granularity shared knowledge rather than knowledge from a single layer. To mitigate the discrepancies between the feature distributions of language-specific and shared knowledge, we propose neural adapters that fuse the two kinds of knowledge automatically (see Sketch 1 below).
2) Slot filling is a fundamental task of natural language understanding. Recent neural models achieve significant success when sufficient training data is available; in practical scenarios, however, the entity types to be annotated keep evolving, even within the same domain. To transfer knowledge from a source model pre-trained on previously annotated data, we propose an approach that learns a label-relational output structure to explicitly capture label correlations in the latent space. Additionally, we construct a target-to-source interaction between the source model and the target model and apply a gate mechanism to control how much information from each model should be passed down (see Sketch 2 below).

3) Neural context-aware models for slot tagging have achieved state-of-the-art performance. However, the presence of OOV (out-of-vocabulary) words significantly degrades the performance of neural models, especially in few-shot scenarios. We propose a novel knowledge-enhanced slot tagging model that integrates the contextual representation of the input text with large-scale lexical background knowledge. Besides, we use multi-level graph attention to explicitly model lexical relations (see Sketch 3 below).

4) Slot filling and intent detection are the two major tasks of spoken language understanding. Most existing work builds these two tasks as joint models with multi-task learning, without considering prior linguistic knowledge. We propose a novel joint model that applies a graph convolutional network over dependency trees to integrate syntactic structure for learning slot filling and intent detection jointly (see Sketch 4 below).
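Sketch 1 (contribution 1). The abstract does not give implementation details for the adversarial separation; the following is a minimal PyTorch-style sketch of one common realization, a language discriminator trained through a gradient-reversal layer. All names here (GradReverse, LangDiscriminator, lambd) are illustrative assumptions, not the thesis code.

```python
# Minimal sketch: adversarial language separation via a gradient-reversal
# layer (GRL). Hypothetical names; not the thesis implementation.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients on the
    backward pass, so the shared encoder learns to fool the discriminator."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # One gradient per forward input; the float lambd gets none.
        return -ctx.lambd * grad_output, None

class LangDiscriminator(nn.Module):
    """Predicts the language of a shared-encoder representation."""
    def __init__(self, hidden_dim, n_langs, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.clf = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_langs))

    def forward(self, shared_repr):
        # Reversed gradients push language identity out of the shared
        # representation while the discriminator itself trains normally.
        return self.clf(GradReverse.apply(shared_repr, self.lambd))
```

Minimizing the discriminator's cross-entropy loss then trains it to identify the language, while the reversed gradients drive the shared encoder toward language-invariant features.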
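Sketch 2 (contribution 2). The gate mechanism that mixes source-model and target-model information could look like the following; KnowledgeGate and its signature are assumptions for illustration.

```python
# Hypothetical sketch: a sigmoid gate that decides, per dimension, how
# much source-model knowledge flows into the target representation.
import torch
import torch.nn as nn

class KnowledgeGate(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, h_src, h_tgt):
        # h_src, h_tgt: (batch, seq_len, hidden_dim) hidden states from
        # the pre-trained source model and the target model.
        g = torch.sigmoid(self.gate(torch.cat([h_src, h_tgt], dim=-1)))
        return g * h_src + (1.0 - g) * h_tgt
```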
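Sketch 3 (contribution 3). The graph attention over lexical knowledge is sketched below as a single simplified attention layer; the thesis uses multi-level graph attention, which would stack several such layers. The layer and its adjacency-matrix interface are assumptions.

```python
# Simplified single graph-attention layer; 'adj' is a 0/1 matrix of
# lexical relations and should include self-loops so every node has
# at least one neighbor to attend to.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h, adj):
        # h: (n_nodes, in_dim); adj: (n_nodes, n_nodes).
        z = self.W(h)
        n = z.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))   # pairwise scores
        e = e.masked_fill(adj == 0, float('-inf'))    # neighbors only
        alpha = torch.softmax(e, dim=-1)              # attention weights
        return alpha @ z                              # aggregated features
```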
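Sketch 4 (contribution 4). A graph convolution over a dependency tree reduces to message passing along dependency arcs; the layer below is an illustrative simplification, not the thesis code.

```python
# Sketch: one GCN layer over a dependency-tree adjacency matrix.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # h: (n_tokens, dim) token states; adj: (n_tokens, n_tokens)
        # symmetric dependency arcs. Add self-loops, then mean-aggregate.
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        return F.relu(self.W((adj / deg) @ h))
```

An intent-detection head can then pool these syntax-aware token states while a slot-filling head tags them token by token.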
Keywords/Search Tags: natural language understanding, cross-lingual transfer learning, label structure knowledge, commonsense knowledge, syntactic structure knowledge