
Research On Node Classification Algorithm Based On Network Representation Learning

Posted on: 2022-11-30    Degree: Master    Type: Thesis
Country: China    Candidate: S C Chen    Full Text: PDF
GTID: 2480306752465344    Subject: Automation Technology
Abstract/Summary:
With the rapid development of artificial intelligence, network representation learning has been widely applied in a variety of real-world applications. However, in an Internet era of explosive data growth, traditional, simple representation learning algorithms cannot meet the demands of processing large-scale, multi-type data, and it is urgent to improve model performance to adapt to ever-changing data. In recent years, emerging network representation learning algorithms have generally achieved excellent results in practical tasks such as link prediction, network reconstruction and node classification. Nevertheless, some problems remain: the learning of network structure features is not accurate enough, and joint optimization of attribute features and structure features is still lacking. Building on the traditional static network model, this thesis designs two network representation learning models for node classification. The main innovative work is as follows:

1. We propose a structural deep network embedding model based on node similarity. On top of the first-order and second-order similarity, we design a framework of high-order loss functions in which the similarity of all node pairs is computed from the overall structure of the graph, remedying the shortcoming of traditional algorithms that compute similarity only from direct edges and local neighbors between node pairs. In addition, the thesis adopts the Adam optimizer and the ReLU activation function to alleviate the vanishing-gradient problem during back propagation, which also avoids the pre-training stage of a deep belief network. Taking the Katz similarity index as an example, the thesis builds a simple model that resolves the computational problems of the original model and illustrates the necessity of introducing a high-order loss function; a minimal sketch of such a loss is given below. We then establish the high-order loss function framework and plug a variety of basic similarity measures into it. Horizontal comparison experiments show that the framework is generally effective in improving model performance.
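As an informal illustration only (not the thesis's own code), the sketch below shows one way a Katz-based high-order reconstruction loss could be written. The Katz index S = Σ_{k≥1} β^k A^k = (I − βA)^{-1} − I is a standard formula; the function names, the attenuation factor beta and the penalty weight are hypothetical choices made for the example.

```python
import numpy as np

def katz_similarity(adj: np.ndarray, beta: float = 0.1) -> np.ndarray:
    """Katz index S = sum_{k>=1} beta^k A^k = (I - beta*A)^{-1} - I.

    Converges when beta < 1 / lambda_max(A); beta=0.1 is a hypothetical default.
    """
    n = adj.shape[0]
    eye = np.eye(n)
    return np.linalg.inv(eye - beta * adj) - eye

def high_order_loss(s_hat: np.ndarray, adj: np.ndarray,
                    beta: float = 0.1, penalty: float = 5.0) -> float:
    """Weighted reconstruction error between a decoder output s_hat and the
    Katz similarity target, in the spirit of SDNE's second-order loss.

    Non-zero target entries receive a larger weight (penalty) so the model
    does not collapse onto the many zero entries of a sparse similarity matrix.
    """
    target = katz_similarity(adj, beta)
    weight = np.where(target != 0, penalty, 1.0)
    return float(np.sum(((s_hat - target) * weight) ** 2))

# Toy usage on a 4-node path graph; s_hat would come from the autoencoder.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(high_order_loss(np.zeros_like(A), A))
```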
2. We propose a combined model of a graph attention network and an Auto-encoder. Through the joint optimization of the two components, the model can capture the network from node attribute information, local structure and global structure. The graph attention network component introduces the attention mechanism on top of the graph convolutional network: while traversing the first-order neighbors of each node, it assigns corresponding weights to all of them. This component focuses on learning the attribute characteristics of the network nodes and feeds the resulting representation vectors into the Auto-encoder component. Inspired by the first-order and second-order loss functions of the structural deep network embedding model and by an innovative similarity measure, the optimized Auto-encoder component learns the characteristics of the network structure. The proposed similarity loss function integrates the concepts of high-order paths, common neighbors and preferential attachment to improve the accuracy of the network structure similarity calculation. Furthermore, we apply a standardization operation to the scores and derive the formula of the similarity measure; link prediction experiments prove the effectiveness of the method. The two connection components are combined to balance the capture of network structure characteristics and node attribute characteristics; a minimal sketch of this combination appears after the concluding paragraph.

Through the above work, this thesis designs two optimized network representation learning models, which improve the accuracy of node classification and also produce better visualizations than the original models.
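For illustration, the sketch below combines a graph attention encoder with a small autoencoder in a generic way, using PyTorch Geometric's GATConv. It is a minimal sketch under assumed dimensions and layer choices, not the thesis's actual architecture; the class name, layer sizes and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class GATAutoEncoder(nn.Module):
    """Minimal sketch: a GAT encoder learns attribute-aware node embeddings,
    and an MLP autoencoder compresses/reconstructs them so that a structural
    (e.g. high-order similarity) loss can be attached to the reconstruction."""

    def __init__(self, in_dim: int, hid_dim: int = 64, emb_dim: int = 32, heads: int = 4):
        super().__init__()
        # Attention-based aggregation over first-order neighbours.
        self.gat1 = GATConv(in_dim, hid_dim, heads=heads)
        self.gat2 = GATConv(hid_dim * heads, emb_dim, heads=1)
        # Auto-encoder over the GAT output.
        self.encoder = nn.Sequential(nn.Linear(emb_dim, emb_dim // 2), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(emb_dim // 2, emb_dim), nn.ReLU())

    def forward(self, x, edge_index):
        h = torch.relu(self.gat1(x, edge_index))
        h = self.gat2(h, edge_index)          # attribute-aware representation
        z = self.encoder(h)                   # compact embedding for classification
        h_rec = self.decoder(z)               # reconstruction for the structural loss
        return z, h, h_rec

# Toy usage: 4 nodes with 8 features on a small edge list.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
model = GATAutoEncoder(in_dim=8)
z, h, h_rec = model(x, edge_index)
recon_loss = torch.mean((h_rec - h) ** 2)    # placeholder for the similarity-based loss
```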
Keywords/Search Tags:Network Representation Learning, Auto-Encoder, Loss Function, Attention Network, Similarity Index