With the rapid development of machine learning technology, the predictive ability of machine learning models has improved significantly, which greatly improves human work efficiency. Machine learning models are therefore widely applied in domains such as recommendation systems, image recognition, and medical diagnosis. However, as predictive accuracy improves, the internal structures of machine learning models become increasingly complex, making it difficult for users to understand how they operate; in other words, machine learning models lack interpretability. Interpretability is the ability to explain, or to present in understandable terms, to a human. Interpretability not only helps model creators improve prediction performance, but also helps address the problems of trust, ethics, fairness, privacy, security, and accountability encountered in applications, and can even help users discover new knowledge. Interpretability research is therefore very important.

Feature interaction refers to any non-additive effect of features on the model output. Compared with feature importance, feature interactions can reveal the additive structure of complex predictive models. The literature on feature interaction mainly focuses on detecting its existence and calculating its strength; little attention has been paid to the internal structure of feature interactions. To gain more insight into the internal structure of complex predictive models, this paper takes feature interaction as its research object. The research consists of three parts: the measurement and detection of feature interactions, the recognition of the interactive form of product separable feature interactions, and the analysis of the importance of individual features within a feature interaction. The goal is to improve the interpretability of the model and to enhance users' trust in complex predictive models. The main contributions of this paper are as follows.

Firstly, a new method for measuring and detecting feature interactions is proposed, which directly reflects the strength of a feature interaction and can detect k-way (k ≥ 3) feature interactions. Based on High Dimensional Model Representation, this paper proposes a new feature interaction measure. It not only directly reflects the utility of feature interactions, but, together with feature importance, can also provide explanations for complex predictive models or for individual samples. A necessary and sufficient condition for the non-existence of a feature interaction is proposed and proved. Based on this theorem, a feature interaction detection method is proposed that searches for feature interactions efficiently, reduces computational complexity, and makes the detection of higher-order feature interactions feasible. A new visualization method is also proposed to display feature interactions and their strength. In experiments on synthetic datasets, the method accurately detects the true feature interactions and reports no spurious ones. In experiments on real datasets, adding the detected feature interactions to the original dataset improves the prediction accuracy of the model, which supports the improvement of complex predictive models and increases users' trust in them.
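To make the notion of interaction as non-additivity concrete, the sketch below (a hypothetical Python illustration, not the HDMR-based measure proposed in this paper; the function and parameter names are assumptions) estimates the strength of a pairwise interaction in a black-box model by averaging mixed second-order differences, which are exactly zero whenever the model is additive in the two features.

```python
import numpy as np

def pairwise_interaction_strength(predict, X, i, j, n_pairs=200, seed=0):
    """Crude, illustrative estimate of the i-j interaction strength of a
    black-box model `predict` evaluated on rows of the background data X.

    If the model is additive in features i and j, the mixed difference
    f(a_i, a_j) - f(a_i, b_j) - f(b_i, a_j) + f(b_i, b_j) is zero for every
    pair of background points a and b; its average magnitude is returned.
    """
    rng = np.random.default_rng(seed)
    diffs = []
    for _ in range(n_pairs):
        a, b = X[rng.integers(len(X), size=2)]   # two background rows
        grid = np.array([a, a, a, a], dtype=float)
        grid[1, j] = b[j]                        # (a_i, b_j)
        grid[2, i] = b[i]                        # (b_i, a_j)
        grid[3, i], grid[3, j] = b[i], b[j]      # (b_i, b_j)
        f = predict(grid)
        diffs.append(f[0] - f[1] - f[2] + f[3])
    return float(np.mean(np.abs(diffs)))
```

With a fitted regressor, for example, `pairwise_interaction_strength(model.predict, X_train, 0, 3)` would return a non-negative score that stays at zero on the sampled points only when features 0 and 3 do not interact.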
Secondly, this paper proposes a method for recognizing the interactive form of product separable feature interactions, which helps users gain insight into the internal structure of a feature interaction and verify the authenticity of detected feature interactions. Since both product structure and additive structure can reduce the dimension of a complex predictive model, this paper studies the internal structure of feature interactions. Drawing on the way product separability is recognized for differential equations, this paper uses ratios instead of partial derivatives and proposes a method for recognizing product separability in complex predictive models. When a feature interaction is completely product separable, its interactive form is further explored via visualization or polynomial fitting. Based on the product separability recognition results, a visualization method is proposed to display the additive structure and product structure of complex predictive models. In experiments on synthetic datasets, the method accurately recognizes product separable feature interactions. In experiments on real datasets, adding a feature interaction to the original dataset in the interactive form recognized by this method improves the performance of predictive models, supports the improvement of complex predictive models, and increases users' trust. At the same time, it can also verify whether a detected feature interaction actually exists.

Thirdly, this paper provides a method for evaluating the importance of features within a feature interaction, which helps users gain a preliminary insight into the internal structure of the feature interaction. Referring to the definition of feature importance, this paper defines the importance of a feature within a feature interaction and measures it using elasticity. According to the resulting importances, a new way of apportioning the feature interaction strength is proposed, which generates a new feature importance. A visualization method is proposed to display the importance of features within a feature interaction. This method can provide explanations for all feature interactions, making up for the limitation that the second method can only explain product separable feature interactions. The importance of a feature within a feature interaction preliminarily reflects that feature's contribution to the interaction strength and provides a reasonable apportionment of the strength of the feature interaction.

In summary, this paper focuses on feature interaction. By measuring the strength and detecting the existence of feature interactions, recognizing the interactive form of product separable feature interactions, and evaluating the contribution of each feature within a feature interaction, we gain more insight into the internal structure of complex predictive models. This not only improves the interpretability of complex predictive models, but also improves their predictive performance.
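The ratio idea behind the product separability recognition can be illustrated with a short sketch (hypothetical Python; the names, tolerances, and the choice to probe the raw prediction surface with the remaining features held fixed are assumptions, not the exact procedure of this paper). If f(x_i, x_j) = g(x_i) · h(x_j), then f(x_i, x_j) / f(x_i_ref, x_j) = g(x_i) / g(x_i_ref), which does not depend on x_j and can therefore be checked numerically for constancy:

```python
import numpy as np

def looks_product_separable(predict, X, i, j, tol=1e-2, n_checks=50, seed=0):
    """Ratio-based check of product separability of a black-box model in
    features i and j, with all other features fixed at a reference row.

    If f(x_i, x_j) = g(x_i) * h(x_j), the ratio f(x_i, x_j) / f(x_i_ref, x_j)
    does not depend on x_j, so it must be (numerically) constant.
    """
    rng = np.random.default_rng(seed)
    base = X[0].astype(float).copy()              # reference row
    xi_vals = rng.choice(X[:, i], size=n_checks)
    xj_vals = rng.choice(X[:, j], size=n_checks)
    for xi in xi_vals:
        ratios = []
        for xj in xj_vals:
            num, den = base.copy(), base.copy()
            num[i], num[j] = xi, xj               # f(x_i, x_j)
            den[j] = xj                           # f(x_i_ref, x_j)
            f_num, f_den = predict(np.vstack([num, den]))
            if abs(f_den) < 1e-12:                # ratio undefined near zero
                return False
            ratios.append(f_num / f_den)
        # The ratio must be (numerically) constant across the x_j values.
        if np.ptp(ratios) > tol * (abs(np.mean(ratios)) + 1e-12):
            return False
    return True
```

For instance, a surface such as x_0 * exp(x_1) passes this check, whereas sin(x_0 + x_1) does not; on noisy real data the tolerance would need to be tuned rather than fixed as above.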