Graph representation learning, as a technique for effectively extracting network features, has been widely used in the analysis of various network systems, such as traffic prediction in transportation networks, fraud analysis in financial networks, and drug development based on molecular networks. Most current graph representation learning methods are designed for static networks. However, real-world networks are naturally dynamic, i.e., their topologies change over time, which poses many challenges to existing graph representation learning methods. For example, dynamic networks contain not only topological information but also temporal evolution information, whereas most graph representation learning methods focus only on topology. In addition, dynamic networks contain rich higher-order structural semantic information that can help uncover their complex and diverse evolutionary patterns, and how to effectively extract this higher-order structural semantic information is also a pressing problem for graph representation learning. Moreover, because dynamic networks demand strong information-capture capability from models, such models usually suffer from high complexity and large numbers of parameters. Therefore, this thesis studies the above problems in depth, aiming to improve the performance of graph representation learning on dynamic networks and to make graph representation learning models lightweight. The main research contents and contributions of this thesis are as follows:

(1) A temporal edge-aware hypergraph convolutional network approach is proposed. To address the problem that existing graph representation learning methods designed for static networks can hardly capture the temporal evolution information of dynamic networks, a temporal hypergraph construction method is designed so that the local structural information and temporal evolution
information in dynamic networks can be represented on the hypergraph simultaneously. In addition, a hyperedge projection operator is introduced to enhance the model's ability to perceive temporal evolution information. Finally, a temporal edge-aware hypergraph convolution method is proposed, which performs both intra-hyperedge and inter-hyperedge information aggregation to enable information aggregation and transfer among nodes on the temporal hypergraph. Experimental results on five public dynamic network datasets show that the method significantly outperforms existing graph representation learning methods.

(2) A temporal group-aware graph diffusion network method is proposed. To address the problem of obtaining higher-order structural semantic information in dynamic networks, a matrix describing group interactions in networks, called the group affinity matrix, is designed to effectively characterize the higher-order structural semantics of dynamic networks. In addition, since dynamic networks have diverse evolutionary patterns, a graph transformer network is introduced to model their temporal evolution, enhancing the adaptivity and interpretability of the model. A temporal embedding is also proposed to strengthen the graph transformer network's perception of temporal order. The method achieves better performance than eight state-of-the-art graph representation learning methods. Moreover, visualizing the model's temporal weights shows that it has strong adaptive capability, which also lends a degree of interpretability to the prediction results.

(3) A model compression method based on dual contrastive distillation is proposed. To address the problem that models for dynamic networks have high complexity and large parameter sizes, a method for compressing graph representation learning models, called dual contrastive distillation, is
proposed. In the dual contrastive distillation approach, an encoder-level contrastive loss function is designed, based on the teacher-student framework, to transfer knowledge from the powerful but complex teacher model to the lightweight and efficient student model. In addition, a local-structure-level contrastive loss function based on mutual information theory is introduced to compensate for the information lost during teacher-to-student knowledge transfer, effectively distilling local structural information from the student model itself. In performance comparisons with four other advanced knowledge distillation techniques, the method shows significant advantages and even outperforms the teacher model on some datasets.
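The encoder-level contrastive loss in (3) follows the general pattern of contrastive knowledge distillation: each student embedding is treated as a positive pair with the teacher embedding of the same node and contrasted against the teacher embeddings of other nodes. The following is a minimal illustrative sketch only, not the thesis's actual formulation; the InfoNCE form, the temperature value, and all names here are assumptions.

```python
import numpy as np

def contrastive_distill_loss(student, teacher, tau=0.5):
    """InfoNCE-style encoder-level distillation loss (illustrative sketch).

    student, teacher: (N, d) arrays of node embeddings from the student
    and teacher encoders. Row i of `student` is pulled toward row i of
    `teacher` (positive pair) and pushed away from all other teacher
    rows (negatives). Returns a scalar loss.
    """
    # L2-normalize so dot products are cosine similarities
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    logits = s @ t.T / tau  # (N, N) similarity matrix, scaled by temperature
    # numerically stable row-wise log-softmax; the diagonal holds positives
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Under this sketch, the loss is small when student and teacher embeddings of the same node agree and large when a student embedding is closer to the teacher embedding of a different node, which is the behavior an encoder-level contrastive transfer objective aims for.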