
Dynamic Network Representation Method Based On Graph Neural Networks

Posted on: 2024-06-15    Degree: Doctor    Type: Dissertation
Country: China    Candidate: X Jiang    Full Text: PDF
GTID: 1520307292497394    Subject: Computer Science and Technology
Abstract/Summary:PDF Full Text Request
Dynamic networks are often used to describe complex systems whose states change over time. They have been widely applied in key fields of natural science such as biology, physics, sociology, and computer science, and exploring the structural and attribute evolution of dynamic networks is an important research direction in complexity science. In particular, following the tremendous success of machine learning in image and text data mining, there has been a surge of research applying deep learning models to dynamic network data. At the theoretical level, the strong ability of deep learning models to fit data distributions overcomes the limitations of traditional network dynamics models in fitting different evolution laws, enabling deeper exploration of the operating mechanisms of complex systems. At the application level, deep learning models based on graph neural networks have achieved significant results in temporal link prediction and temporal attribute prediction tasks such as online recommendation, social behavior prediction, and network traffic prediction. Studying dynamic network representation methods based on graph neural networks therefore has significant theoretical research value and outstanding application value.

Although graph neural networks have shown great potential in capturing complex structural information and learning effective node representations in static networks, the random connection patterns, unstable spatiotemporal distribution, and spatiotemporal correlations of node attributes in dynamic networks pose new challenges for their application. To address these challenges, this thesis adopts inverse reinforcement learning (IRL) and self-attention mechanisms as the primary approaches to dynamic network analysis. First, a Markov decision process is employed to model the continuous occurrence of links in dynamic networks: by incorporating a neighborhood structure convolution and a learnable reward function, the IRL agent is provided with stable local topological features, enabling it to learn node-to-node connection strategies that closely resemble the real data. Second, a link representation learning method and a node attribute learning method are proposed, both built around self-attention modules. The key research components are as follows:

To address the probabilistic connection patterns between nodes in dynamic networks, a dynamic network inverse reinforcement learning framework (DN-IRL) is proposed to learn node connection strategies from real dynamic network data. First, links within a fixed time range and links appearing at the next time step are treated, respectively, as an expert demonstration set of consecutive environmental states and actions. The learnable reward function is then optimized by maximizing the expected cumulative reward of the expert behavior collected from the original dynamic network, and is used to train the agent's policy. As the agent's expected cumulative reward approaches the expert's, the learned policy approaches the node connection strategy of the real dynamic network. Furthermore, a neighborhood structure convolution based on node embeddings is proposed, which significantly enhances the agent's sensitivity to changes in network structure and improves the model's accuracy in predicting new links.

To address the non-linear temporal sparsity, weak sequential correlation, and discontinuous structural dynamics in the unstable spatiotemporal distribution of dynamic network data, a link-sequence-to-link-sequence model based on the self-attention mechanism (DNformer) is proposed. First, the dynamic network is segmented into multiple link sequences over consecutive time periods, preserving the temporal and structural correlations between the input and output network slices. Then, node cluster encoding and inter-link similarity encoding are designed to capture the changing structures within consecutive link sequences, enabling the model to perceive the importance and relevance of links. Finally, a parallel multi-head self-attention mechanism captures the potentially diverse structural evolution patterns between the input and output link sequences. In addition, a structural similarity measure is introduced into the loss function to quantify the structural differences between link sequences, thereby improving the predictive performance of the model.

To address the non-stationarity of traffic flow data and the spatiotemporal correlation of node flow attributes in traffic network prediction tasks, a traffic flow prediction model based on the self-attention mechanism, TPformer, is proposed. First, node triplet encoding assigns additional spatial-context structural features to each node, and parallel multi-head self-attention layers capture the spatiotemporal correlations of node-to-node structure and attributes. Furthermore, the topological features of the traffic network are incorporated into the self-attention mechanism, so that when predicting the flow of the next time step the model attends more strongly to the flow attributes of neighboring nodes at the current time step, achieving more accurate node flow prediction. Additionally, a region flow loss function is designed to enhance the model's focus on complex nodes with regional connectivity, improving its ability to learn the evolution of node flow attributes under complex traffic conditions.
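The reward-learning loop of the DN-IRL framework can be illustrated with a toy sketch: a learnable linear reward over local topological features is updated until the feature expectations of the induced link-choice policy match those of the expert demonstrations. All names, dimensions, and the synthetic "expert" below are illustrative assumptions, not the thesis implementation.

```python
# Toy MaxEnt-IRL-style sketch of the DN-IRL idea: learn a reward function
# whose induced softmax policy reproduces expert link-formation behaviour.
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS, D = 4, 3                       # candidate target nodes, feature dim
# phi[a]: local topological features of choosing to link to node a (assumed)
phi = rng.normal(size=(N_ACTIONS, D))

# "Expert demonstrations": next-step links observed in the real network,
# simulated here from a hidden true reward w_true . phi.
w_true = np.array([1.5, -0.5, 1.0])
expert_probs = np.exp(phi @ w_true)
expert_probs /= expert_probs.sum()
mu_expert = expert_probs @ phi            # expert feature expectation

def policy(w):
    """Softmax policy induced by the current learnable reward w . phi."""
    logits = phi @ w
    p = np.exp(logits - logits.max())
    return p / p.sum()

w = np.zeros(D)
for _ in range(2000):
    mu_agent = policy(w) @ phi
    # Gradient step: close the gap between agent and expert feature
    # expectations, which maximises the expert's expected cumulative reward.
    w += 0.1 * (mu_expert - mu_agent)

# The agent's link-choice distribution now approximates the expert's.
assert np.allclose(policy(w), expert_probs, atol=1e-2)
```

The real framework replaces the fixed feature matrix with a neighborhood structure convolution over node embeddings and scales the state/action spaces to full dynamic networks; this sketch only shows the reward-matching principle.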
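The parallel multi-head self-attention at the core of a DNformer-style link-sequence model can be sketched in a few lines of numpy. The link embeddings, head count, and dimensions below are illustrative assumptions; the thesis model additionally applies node cluster and link similarity encodings before this step.

```python
# Minimal multi-head self-attention over a sequence of link embeddings.
import numpy as np

def multi_head_self_attention(X, Wq, Wk, Wv, n_heads):
    """X: (seq_len, d_model) link-embedding sequence; returns same shape."""
    L, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    out = np.empty_like(X)
    for h in range(n_heads):                      # heads run independently
        q, k, v = (M[:, h*dh:(h+1)*dh] for M in (Q, K, V))
        scores = q @ k.T / np.sqrt(dh)            # pairwise link affinities
        scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)  # softmax over the sequence
        out[:, h*dh:(h+1)*dh] = attn @ v          # weighted mix of links
    return out

rng = np.random.default_rng(1)
d_model, seq_len, n_heads = 8, 5, 2
X = rng.normal(size=(seq_len, d_model))           # one link sequence slice
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Y = multi_head_self_attention(X, Wq, Wk, Wv, n_heads)
assert Y.shape == X.shape
```

Each head can specialize in a different structural evolution pattern between the input and output link sequences, which is why the heads are computed in parallel rather than shared.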
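The intent of a "region flow" style loss can be illustrated as a squared error that up-weights nodes with dense regional connectivity, so training focuses on structurally complex nodes. The degree-based weighting and the function name below are assumptions for illustration; the thesis defines its own regional connectivity measure.

```python
# Hypothetical region-flow-style loss: weighted MSE over node flow values,
# with weights derived from each node's regional connectivity (here: degree).
import numpy as np

def region_flow_loss(pred, target, adj, alpha=1.0):
    """pred, target: (n_nodes,) flows; adj: (n, n) adjacency matrix."""
    degree = adj.sum(axis=1)
    weights = 1.0 + alpha * degree / degree.max()  # busier regions weigh more
    return float(np.mean(weights * (pred - target) ** 2))

adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 1],
                [1, 0, 1, 0]], dtype=float)
pred = np.array([1.0, 2.0, 3.0, 4.0])
target = np.array([1.5, 2.0, 2.5, 4.0])
loss = region_flow_loss(pred, target, adj)
assert loss > np.mean((pred - target) ** 2)  # errors at hub nodes cost more
```

Compared with a plain MSE, the same prediction error is penalized more heavily at well-connected nodes, matching the stated goal of improving learning under complex traffic conditions.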
Keywords/Search Tags:Dynamic Network, Time Series Link Prediction, Graph Neural Network, Inverse Reinforcement Learning, Self-Attention Mechanism