
Research And Implementation Of Interpretability Technology For Graph Machine Learning Algorithms

Posted on: 2022-07-28    Degree: Master    Type: Thesis
Country: China    Candidate: Y F Liu    Full Text: PDF
GTID: 2480306338468524    Subject: Computer technology

Abstract/Summary:
Graph-structured data is widely used in real life: people can model rich association relationships as graph models. Graph machine learning algorithms make full use of structural information to mine more valuable information and provide more accurate results. Although these algorithms use the dependencies between nodes to assist decision-making, the complex structure of graphical models also makes them harder to interpret. Existing graph-model interpretability algorithms fail to attribute decision results fairly to the factors that participate in the decision, and most are devoted to simplifying computation while ignoring counterfactual reasoning from the perspective of human cognition.

To address these problems, this thesis first proposes a Shapley-based interpretability method for graphical models, which introduces the Shapley value originally proposed in game theory. We define probability contributions and topological contributions on graphical models, and propose an efficient approximate method for computing Shapley values on them, which evaluates the contribution of nodes fairly and effectively. We further propose meta-explanations for Shapley values to demonstrate the validity and understandability of the interpretation results.

We also propose a counterfactual-based interpretability method for graphical models, with two metrics for counterfactual explanations: simulatability and counterfactual relevance. Based on these, a graph-based counterfactual explanation form is designed. A two-objective optimization problem is used to search for explanations, and evaluation indicators for the robustness of explanations are also proposed, so as to find explanations that are fully consistent with the human cognitive process.

Experimental results show that both algorithms explain the predictions of graph machine learning models well and can complete interpretability tasks in different domains. Finally, this thesis implements an interpretability system based on graphical models. The system realizes a fully automated pipeline covering data processing, model training, model interpretation, and result display, and allows users to interactively adjust algorithm parameters to adapt to different interpretability tasks.
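The exact Shapley value requires evaluating the model on every subset of nodes, which is exponential in the graph size; this is why the thesis proposes an approximate computation. As an illustration only, a common approximation is Monte Carlo permutation sampling, sketched below. The function name `shapley_node_contributions` and the `value_fn` interface (a callable scoring the model's prediction with only a subset of nodes retained) are hypothetical assumptions, not the thesis's actual method.

```python
import random

def shapley_node_contributions(nodes, value_fn, num_samples=200, seed=0):
    """Monte Carlo approximation of Shapley values for graph nodes.

    value_fn(subset) -> float gives the model's prediction score when
    only the nodes in `subset` are kept (e.g. the rest masked out).
    Averaging each node's marginal contribution over random orderings
    estimates its Shapley value without enumerating all 2^n subsets.
    """
    rng = random.Random(seed)
    phi = {v: 0.0 for v in nodes}
    for _ in range(num_samples):
        perm = list(nodes)
        rng.shuffle(perm)            # one random arrival order of nodes
        included = set()
        prev = value_fn(frozenset(included))
        for v in perm:
            included.add(v)
            cur = value_fn(frozenset(included))
            phi[v] += cur - prev     # marginal contribution of v in this order
            prev = cur
    return {v: s / num_samples for v, s in phi.items()}
```

For an additive scoring function the estimate is exact, which makes a convenient sanity check; for a real graph model, the variance shrinks as `num_samples` grows.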
Keywords/Search Tags:Probabilistic Graphical Model, Graph Neural Network, Shapley Value Explanations, Counterfactual Explanation