With the popularization of the Internet and the development of artificial intelligence, deep learning technology has been widely applied in many areas of everyday life. Current techniques mainly handle intuitive, unconscious, and fast tasks, such as image classification and recognition or text retrieval; in cognitive neuroscience, solving such tasks is attributed to System 1. How to accurately and promptly solve conscious System 2 tasks that require logical and cognitive abilities, such as reasoning and summarization, has become a major problem in the development of artificial intelligence. Furthermore, providing explainable information for System 2 would effectively address the "black box" problem in deep learning and improve its robustness and reliability. The task is complex, and existing work remains imperfect. In this thesis, an end-to-end cognitive system for explainable reasoning is developed, targeting the reasoning objectives that System 2 is expected to solve.

First, this thesis proposes a new framework, a syllogism-based reasoning algorithm (Syllogism-QA), for complex question-answering scenarios in reading comprehension that require reasoning. By introducing the principles of the logical syllogism into the encoding of text passages, the system makes the reasoning process of the reading comprehension task explicit. More specifically, a set of propositions, represented as a group of sentences, is explicitly incorporated into the neural network encoder as auxiliary information alongside the given QA text material, and a corresponding conclusion is then derived as the underlying reasoning evidence supporting the rationality of the selected answer. Experiments show that the proposed Syllogism-QA not only outperforms the baseline algorithms but also reveals the logical reasoning process behind the predicted results.

Second, building on current explanation-generation methods, this thesis studies algorithms for improving interpretability. Explanations are obtained from the reasoning process with three algorithms: hierarchical attention interpretation evaluation, comparison of attention changes in the pre-trained inference model, and post-hoc interpretation of important tokens. At the same time, to obtain metrics that better reflect the accuracy of the interpretation information, we propose and experiment with two evaluation methods: the similarity between interpretation conclusions, and manual evaluation of rationality and consistency. The robustness and reliability of the interpretations can thus be evaluated from multiple perspectives.

Finally, based on the syllogism reasoning model for reading comprehension and the interpretability-enhancement algorithms, a comprehensive interpretable question-answering reasoning system is designed and developed in this thesis. The system preprocesses the input information, generates predicted answers with an answer inference module, and produces explanation information through an explanation generation module. The system combines the advantages of deep reasoning and explainability and has broad prospects in the current mobile Internet era.
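To make the syllogism-based encoding described above concrete, the following is a minimal sketch of how premise sentences could be folded into a transformer encoder alongside the passage and each question-option pair. The class name SyllogismQA, the use of a Hugging Face BERT encoder, and the [CLS]-based scoring head are illustrative assumptions, not the thesis's actual architecture.

```python
# Illustrative sketch only: syllogism premises are prepended to the passage
# so the encoder attends over propositions, passage, and question jointly.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class SyllogismQA(nn.Module):  # hypothetical name, not the thesis code
    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.encoder = AutoModel.from_pretrained(model_name)
        self.scorer = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, premises, passage, question, options):
        # Fold the syllogism premises into the first segment so they act
        # as auxiliary evidence for every (question, option) pair.
        context = " ".join(premises) + " " + passage
        pairs = [f"{question} {opt}" for opt in options]
        batch = self.tokenizer(
            [context] * len(options), pairs,
            padding=True, truncation=True, return_tensors="pt",
        )
        cls = self.encoder(**batch).last_hidden_state[:, 0]  # [CLS] vectors
        return self.scorer(cls).squeeze(-1)  # one score per answer option

model = SyllogismQA()
scores = model(
    premises=["All birds can fly.", "A sparrow is a bird."],
    passage="The garden was full of sparrows.",
    question="Can a sparrow fly?",
    options=["yes", "no"],
)
print(scores.argmax().item())  # index of the predicted answer
```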
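In the same spirit, a toy version of the "similarity between interpretation conclusions" metric proposed above could compare two generated explanations with TF-IDF cosine similarity. The thesis's exact formulation is not reproduced here, and the function name is hypothetical; this only illustrates the idea that explanations which agree should score near 1.

```python
# Toy similarity metric between two interpretation conclusions (illustrative).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def explanation_similarity(expl_a: str, expl_b: str) -> float:
    """Cosine similarity between TF-IDF vectors of two explanations."""
    vecs = TfidfVectorizer().fit_transform([expl_a, expl_b])
    return float(cosine_similarity(vecs[0], vecs[1])[0, 0])

print(explanation_similarity(
    "The answer follows because all birds can fly and a sparrow is a bird.",
    "A sparrow is a bird, and all birds can fly, so it can fly.",
))
```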
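Finally, the three-stage system summarized above (preprocessing, answer inference, explanation generation) can be sketched as a simple pipeline. Every interface below is a hypothetical stand-in for the thesis modules, shown only to fix the data flow.

```python
# Sketch of the pipeline: preprocessing -> answer inference -> explanation.
# All module interfaces are hypothetical stand-ins, not the thesis code.
from dataclasses import dataclass

@dataclass
class QARequest:
    passage: str
    question: str
    options: list

def preprocess(req: QARequest) -> QARequest:
    # Placeholder normalization; real preprocessing would also extract
    # candidate propositions from the passage for the syllogism encoder.
    return QARequest(req.passage.strip(), req.question.strip(),
                     [o.strip() for o in req.options])

def infer_answer(req: QARequest) -> int:
    # Stand-in for the Syllogism-QA answer inference module.
    return 0

def generate_explanation(req: QARequest, answer_idx: int) -> str:
    # Stand-in for the explanation generation module (e.g. attention-based
    # or post-hoc token attribution, as studied in the thesis).
    return f"Selected '{req.options[answer_idx]}' based on passage evidence."

def answer_with_explanation(req: QARequest):
    req = preprocess(req)
    idx = infer_answer(req)
    return req.options[idx], generate_explanation(req, idx)

print(answer_with_explanation(QARequest(
    "The garden was full of sparrows.", "Can a sparrow fly?", ["yes", "no"])))
```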