
Research On Explainable Variational Graph Autoencoder Method Based On Non-informative Prior Distribution

Posted on: 2022-01-31
Degree: Master
Type: Thesis
Country: China
Candidate: L L Sun
Full Text: PDF
GTID: 2480306329490604
Subject: Software engineering
Abstract/Summary:
The popularity of the Internet has driven the growth of complex networks, and learning low-dimensional dense vectors that characterize the nodes of a complex network has become a research focus. The variational graph autoencoder, with its powerful generative ability, has gradually become one of the common methods for complex network representation. However, current research on variational graph autoencoders faces two challenges. 1) Most variational (graph) autoencoders and their variants assume that the latent variables follow a standard normal prior, or a complex prior built from extensive expert experience. In many practical situations, however, it may be unclear what form the prior should take, or the prior probability may be difficult to obtain. Hence, when prior knowledge is insufficient, choosing a rational prior is an arduous task. 2) Graph neural network-based models have achieved great success on complex networks in recent years, but such methods are usually black boxes, making the learned low-dimensional representations unexplainable. Improving the explainability of graph neural network-based models helps people trust the prediction results and correct systematic errors in the model in time. Therefore, improving model explainability has increasingly become a hot scientific topic.

In view of these two challenges, the main innovations of this thesis are as follows. (1) A new explainable variational graph autoencoder, NPEVGAE, is proposed, which relies on a non-informative prior. This is the first work to leverage the idea of a non-informative prior to address the lack of prior knowledge about latent variables in the variational graph autoencoder, allowing NPEVGAE to select a rational prior for the latent variables even in the absence of prior knowledge. It removes the unrealistic constraint that the posterior distribution of the latent variables must lean toward a standard normal prior, and instead adopts a non-informative prior, invariant to the choice of parameterization, as the prior probability of the latent variables. The model no longer encourages the learned latent representations to cluster at the origin; instead it encourages the posterior to learn the model parameters from the samples, so it can make full use of the latent space. Through detailed analysis, this thesis shows that the non-informative prior causes little interference to the learning of the posterior parameters, justifying this choice of prior. (2) NPEVGAE provides a new perspective for understanding the latent representations of nodes and improves the explainability of the model itself. Each dimension of an embedding is treated as the soft probability that the node belongs to a block, so the embeddings become explainable. At the same time, a block-block relevance matrix represents the intra-block and inter-block relations: when deciding whether two nodes are linked, the model considers not only the two embeddings but also the block-block relevance matrix. (3) Several state-of-the-art network representation learning algorithms are selected as baselines, and comparative analysis is carried out on two classical tasks. Finally, visualization is used to demonstrate intuitively that NPEVGAE can effectively differentiate node categories and fully utilize the latent space.
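The explainable decoder described in contribution (2) can be illustrated with a minimal sketch. Assuming (as the abstract describes, but without access to the thesis's actual implementation) that each embedding is softmax-normalized into soft block memberships and that edge probability combines the two memberships with a block-block relevance matrix, a bilinear form followed by a sigmoid is one natural reading. The function name `edge_probability` and all numbers below are illustrative, not taken from the thesis.

```python
import numpy as np

def softmax(x):
    """Turn a raw embedding into soft block-membership probabilities."""
    e = np.exp(x - x.max())
    return e / e.sum()

def edge_probability(h_i, h_j, B):
    """Sketch of a block-aware edge decoder.

    h_i, h_j : raw latent vectors for two nodes; each softmaxed dimension
               is read as the probability of belonging to one block.
    B        : block-block relevance matrix encoding intra- and
               inter-block connection strengths.
    """
    z_i, z_j = softmax(h_i), softmax(h_j)
    logit = z_i @ B @ z_j              # memberships weighted by block relations
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> edge probability

# Toy setup with 3 blocks: strong intra-block, weak inter-block relevance.
B = np.array([[4.0, -1.0, -1.0],
              [-1.0, 4.0, -1.0],
              [-1.0, -1.0, 4.0]])

same_block = edge_probability(np.array([3.0, 0.0, 0.0]),
                              np.array([2.5, 0.0, 0.0]), B)
diff_block = edge_probability(np.array([3.0, 0.0, 0.0]),
                              np.array([0.0, 2.5, 0.0]), B)
print(same_block > diff_block)  # prints True: same-block nodes link more often
```

Because each embedding dimension is a block-membership probability, the decoder's prediction for a node pair can be traced back to which blocks the two nodes occupy and how strongly those blocks are related in B, which is the sense in which the representation is explainable.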
Keywords/Search Tags: Network Representation Learning, Variational Graph Autoencoder, Non-informative Prior Distribution