
Network Representation Learning For Social Computing

Posted on: 2019-04-29    Degree: Doctor    Type: Dissertation
Country: China    Candidate: C C Tu    Full Text: PDF
GTID: 1360330590451477    Subject: Computer Science and Technology
Abstract/Summary:
How to represent vertices in networks plays an important role in data mining and social network analysis. With the advent of large-scale social networks, typical network representation methods usually suffer from issues of computational efficiency and interpretability. Moreover, these social networks contain abundant heterogeneous information. These characteristics make existing methods ill-suited to large-scale social networks. Network Representation Learning (NRL), also known as Network Embedding (NE), aims to learn a real-valued, low-dimensional representation vector for each vertex. These representation vectors encode the network structure and other heterogeneous information of vertices, and are usually treated as features in downstream network analysis tasks, including vertex classification, link prediction, and community detection. To address the computational efficiency and interpretability issues of existing NRL methods, we propose to learn explicit and implicit network representations to improve the performance of network analysis tasks.

To learn explicit network representations, we conducted the following work: (1) Lexical item-based explicit network representation. To improve the performance of vertex classification, we present a cascaded two-level classification framework with community refinement that incorporates users' heterogeneous text information and network structure information. The proposed model achieves promising performance on profession identification. (2) Tag and topic-based explicit network representation. To address the interpretability issue, we employ explicit tags to represent user vertices and exploit the correspondence between tags and social behaviors for user tag suggestion.

Although explicit representations are interpretable, they suffer from the computational efficiency issue. Motivated by the success of representation learning in images, speech, and natural language, we propose a series of NRL methods that learn implicit low-dimensional representations of vertices: (1) Max-margin implicit network representation. We propose Max-Margin DeepWalk (MMDW) to learn discriminative network representations and improve the performance of vertex classification by training the max-margin classifier and the NRL model jointly. (2) Context-aware implicit network representation. We propose Context-Aware Network Embedding (CANE) to learn dynamic embeddings of a vertex according to the neighbors it interacts with. By employing a mutual attention mechanism, CANE significantly improves the performance of link prediction. (3) Social relation extraction-based implicit network representation. We propose a novel translation-based NRL model, TransNet, to model the relations between vertices. By taking the semantic labels on edges into account, TransNet outperforms existing methods on the social relation extraction task. (4) Community-enhanced implicit network representation. To integrate the global community patterns of social networks, we exploit the analogy between topics in text and communities in networks, and propose the Community-enhanced NRL (CNRL) model to learn vertex representations and detect communities simultaneously.
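To make the general NRL pipeline described in the abstract concrete, the sketch below learns vertex vectors from truncated random walks (a DeepWalk-style skip-gram, which MMDW builds on) and uses them as features for vertex classification. It is a minimal, generic sketch under stated assumptions, not the dissertation's models or data: the toy graph, the embedding dimension, walk parameters, and the gensim >= 4.0 API are illustrative choices.

import random

import networkx as nx
from gensim.models import Word2Vec            # gensim >= 4.0 assumed
from sklearn.linear_model import LogisticRegression

def random_walks(graph, num_walks=10, walk_length=40):
    """Generate truncated random walks, one 'sentence' per walk."""
    walks = []
    nodes = list(graph.nodes())
    for _ in range(num_walks):
        random.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(graph.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(random.choice(neighbors))
            walks.append([str(v) for v in walk])
    return walks

# Toy graph standing in for a social network.
G = nx.karate_club_graph()
walks = random_walks(G)

# Skip-gram over the walks yields one real-valued, low-dimensional vector per vertex.
emb = Word2Vec(walks, vector_size=64, window=5, min_count=0, sg=1, epochs=5)

# Treat the learned vectors as features for a downstream task
# (here: vertex classification, with the karate-club faction as the label).
X = [emb.wv[str(v)] for v in G.nodes()]
y = [G.nodes[v]["club"] for v in G.nodes()]
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))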
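The context-aware model (CANE) described above makes a vertex's text-based embedding depend on the neighbor it is paired with through mutual attention. The sketch below shows one plausible reading of such mutual-attention pooling (a word-by-word correlation matrix, row/column pooling, and softmax weighting); the shapes, mean pooling, and random inputs are assumptions for illustration and stand in for trained text encoders and parameters.

import numpy as np

rng = np.random.default_rng(0)

d, m, n = 64, 12, 9          # embedding dim, #words of u, #words of v (assumed)
P = rng.normal(size=(d, m))  # text features of vertex u (e.g., a CNN's output)
Q = rng.normal(size=(d, n))  # text features of vertex v
A = rng.normal(size=(d, d))  # attention matrix (trainable in practice, random here)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Correlation between every word of u and every word of v.
F = np.tanh(P.T @ A @ Q)                  # shape (m, n)

# Pool along each axis and normalize to attention weights.
a_p = softmax(F.mean(axis=1))             # importance of u's words w.r.t. v
a_q = softmax(F.mean(axis=0))             # importance of v's words w.r.t. u

# Context-aware text embeddings: each vertex's embedding now depends on
# which neighbor it interacts with.
u_text = P @ a_p                          # shape (d,)
v_text = Q @ a_q
print(u_text.shape, v_text.shape)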
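The translation-based model (TransNet) described above treats an edge as a translation between its endpoint embeddings. A minimal margin-based version of that translation objective is sketched below for a single triple; the dimension, the random vectors, and the corrupted negative tail are illustrative assumptions, and the full model's autoencoder over edge labels is omitted.

import numpy as np

rng = np.random.default_rng(1)
d = 32                               # embedding dimension (assumed)

u = rng.normal(size=d)               # head vertex embedding
v = rng.normal(size=d)               # tail vertex embedding
l = rng.normal(size=d)               # edge (relation) embedding
v_neg = rng.normal(size=d)           # corrupted tail for the negative sample

def dist(x, y):
    """Squared L2 distance used as the translation score."""
    return float(np.sum((x - y) ** 2))

margin = 1.0
pos = dist(u + l, v)                 # should be small for a true edge
neg = dist(u + l, v_neg)             # should be large for a corrupted edge
loss = max(pos + margin - neg, 0.0)  # margin-based ranking loss
print(f"positive score {pos:.3f}, negative score {neg:.3f}, loss {loss:.3f}")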
Keywords/Search Tags:Network Representation Learning, Network Embedding, User Profiling, Tag Suggestion, Vertex Classification, Link Prediction, Community Detection