
Research On Adversarial Robustness Of Graph Neural Network

Posted on: 2024-02-24
Degree: Doctor
Type: Dissertation
Country: China
Candidate: M M Zhang
Full Text: PDF
GTID: 1520306944966539
Subject: Computer Science and Technology

Abstract/Summary:
Graphs, as a data structure for representing relationships between objects, are ubiquitous in the real world, including social graphs, protein graphs, communication graphs, molecular graphs, and more. Graph machine learning, which aims to leverage machine learning techniques to extract hidden information from graphs and improve application performance, has received extensive attention. In particular, graph neural networks (GNNs), guided by the principles of deep learning, can capture the complex nonlinearities of graphs; with their outstanding performance, GNNs have emerged as the leading paradigm for graph analysis. Unfortunately, recent studies have revealed that deep learning models lack adversarial robustness, exhibiting performance degradation under imperceptible adversarial perturbations. Similarly, graph neural networks, as an extension of deep learning to graph data, are susceptible to adversarial attacks and exhibit unique vulnerabilities. This issue is particularly concerning because network data is highly open, making it a prime target for attacks. Such vulnerabilities can cause significant economic losses for individuals and companies, and pose serious threats to many public services. It is therefore crucial to explore the adversarial robustness of GNNs and to develop trustworthy GNN systems.

However, investigating adversarial robustness on real graph data faces the following challenges: 1) Real graph data is heterogeneous, involving multiple types of nodes and edges; existing adversarial robustness studies, built on homogeneous graph neural networks, fail to adapt to real heterogeneous graph data. 2) Real graph data is often highly open and easily modified, as in systems that rely on user-reported label data or privacy-preserving systems that directly receive local model parameters uploaded by clients; existing topology/feature attacks do not apply to such diverse attack targets as labels and parameters. 3) The robustness of different nodes in graph data varies, and existing attacks allocate attack budgets with heuristic methods; it is difficult to accurately estimate the budget required to successfully attack each node.

To address these challenges, this thesis conducts an in-depth investigation into the adversarial robustness of GNNs, exploring and resolving design flaws and security risks. The main contributions and innovations of this thesis are as follows:

(1) In light of the challenge arising from heterogeneous graph data, we investigate the adversarial robustness of heterogeneous GNNs and find that perturbation enlargement and soft attention mechanisms in their aggregation paradigm lead to unique adversarial vulnerabilities. We therefore propose a robust heterogeneous GNN framework that mitigates perturbation enlargement by introducing prior transition probabilities and removes unreliable neighbors with a purifier. Experimental results demonstrate the existence of unique adversarial vulnerabilities in heterogeneous GNNs and show that the proposed defense framework significantly improves adversarial robustness.

(2) Regarding the diverse attacks (e.g., label attacks) enabled by the openness of real graph data, we investigate the adversarial robustness of GNNs against label attacks. We propose an effective attack model based on an approximated closed form of GNNs and a continuous surrogate of the non-differentiable objective, efficiently generating attacks with gradient-based optimizers. Furthermore, we show that one key reason GNNs are vulnerable to label-flipping attacks is overfitting to the flipped labels. Based on this observation, we propose a defense framework that introduces a community-preserving self-supervised task as regularization to avoid overfitting. Experimental results demonstrate that GNNs are vulnerable to label attacks and that our defense yields considerable improvement.

(3) Regarding the diverse attacks (e.g., parameter attacks) enabled by the openness of real graph data, we investigate the adversarial robustness of GNNs against parameter attacks. Since GNN-based federated recommendation systems depend on local model parameters for privacy protection, this study analyzes their adversarial robustness against parameter attacks. Leveraging their unique sparse parameter aggregation mechanism, we propose a series of efficient attack strategies. Experimental results demonstrate that the proposed attacks can hinder the convergence of federated GNNs while controlling only a small number of users.

(4) To tackle the difficulty of estimating attack budgets caused by the varying robustness of nodes, we propose a novel minimum-budget topology adversarial attack model. The model adaptively finds the minimum perturbation budget with which each node can be successfully attacked, breaking the dilemma between the effectiveness and the invisibility of attacks imposed by a fixed budget. We use a differentiable dynamic projected gradient descent module to transform the complex non-convex constrained optimization problem into a continuous convex constrained one. Experimental results demonstrate that the minimum-budget topology attack achieves successful attacks with the fewest perturbed edges.
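The label-flipping threat model of contribution (2) can be illustrated with a toy sketch: a linear classifier fit in closed form (ridge regression on one-hot labels) stands in for the approximated closed form of the GNN, and a greedy search over label flips stands in for the thesis's gradient-based optimizer. All names and data here are hypothetical, and the greedy loop is a simplification, not the dissertation's actual attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated classes; a closed-form linear "surrogate"
# plays the role of the linearized GNN described in the abstract.
n, d = 40, 2
X = np.vstack([rng.normal(-2, 0.5, (n // 2, d)),
               rng.normal(+2, 0.5, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

train = np.arange(0, n, 2)   # nodes whose (attackable) labels the model trains on
test = np.arange(1, n, 2)

def fit_closed_form(labels):
    # Ridge regression on one-hot labels: W = (A^T A + eps I)^-1 A^T Y
    Y = np.eye(2)[labels[train]]
    A = X[train]
    return np.linalg.solve(A.T @ A + 1e-2 * np.eye(d), A.T @ Y)

def test_accuracy(labels):
    W = fit_closed_form(labels)
    pred = (X[test] @ W).argmax(1)
    return (pred == y[test]).mean()

clean_acc = test_accuracy(y)

# Greedy label-flipping attack: under a small budget, repeatedly flip the
# training label whose flip hurts test accuracy the most.
labels, budget = y.copy(), 5
for _ in range(budget):
    best_i, best_acc = None, 1.1
    for i in train:
        trial = labels.copy()
        trial[i] = 1 - trial[i]
        acc = test_accuracy(trial)
        if acc < best_acc:
            best_i, best_acc = i, acc
    labels[best_i] = 1 - labels[best_i]

attacked_acc = test_accuracy(labels)
print(clean_acc, attacked_acc)
```

Because the surrogate has a closed form, each candidate flip is evaluated by a cheap refit; the thesis instead relaxes the discrete flips into a continuous surrogate so a gradient optimizer can search the whole label set at once.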
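The parameter-attack setting of contribution (3) can be sketched with a minimal federated-averaging loop on a one-dimensional quadratic loss, where a few "attacker" clients upload scaled, negated updates. This is a generic model-poisoning illustration under assumed numbers (10 clients, learning rate 0.1, scale 3), not the thesis's GNN-specific attack, which exploits the sparse parameter aggregation of federated recommendation.

```python
import numpy as np

def run(num_attackers):
    # Toy FedAvg: minimize f(w) = (w - 3)^2; honest clients propose a
    # gradient step, attackers upload a scaled negation of that step.
    w, lr, clients = 0.0, 0.1, 10
    for _ in range(100):
        honest = -lr * 2 * (w - 3.0)                  # honest local update
        updates = [honest] * (clients - num_attackers)
        updates += [-3.0 * honest] * num_attackers    # poisoned updates
        w += np.mean(updates)                         # server averages all updates
    return w

clean = run(0)      # converges to the optimum w* = 3
poisoned = run(3)   # 3 of 10 malicious clients prevent convergence
print(clean, poisoned)
```

Even this crude averaging example shows why controlling a small fraction of users can stall training; the dissertation's attacks are more efficient because sparse aggregation lets a few users dominate the parameters of rarely-interacted items.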
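A core ingredient of the projected-gradient scheme in contribution (4) is projecting a relaxed perturbation vector back onto the feasible set (entries in [0, 1], total perturbation within budget). The sketch below implements that standard box-plus-L1 projection by bisection on the dual variable; the function name and the bisection details are illustrative assumptions, and the thesis's dynamic projection additionally adapts the budget per node.

```python
import numpy as np

def project_box_l1(s, budget):
    """Project s onto {p : 0 <= p_i <= 1, sum(p) <= budget} by bisecting
    on the dual variable mu of the budget constraint (illustrative sketch)."""
    clipped = np.clip(s, 0.0, 1.0)
    if clipped.sum() <= budget:
        return clipped                      # budget inactive: box clip suffices
    # sum(clip(s - mu, 0, 1)) is nonincreasing in mu; bisect until it hits budget.
    lo, hi = (s - 1.0).min(), s.max()
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        if np.clip(s - mu, 0.0, 1.0).sum() > budget:
            lo = mu
        else:
            hi = mu
    return np.clip(s - hi, 0.0, 1.0)

p = project_box_l1(np.array([0.9, 0.5, 0.3, -0.2]), 1.0)
print(p, p.sum())
```

After each gradient step on the relaxed edge-perturbation variables, applying this projection keeps the iterate feasible, which is what turns the discrete, non-convex budget constraint into a continuous convex one that plain gradient descent can handle.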
Keywords/Search Tags:Graph Neural Network, Adversarial Robustness, Adversarial Attack, Heterogeneous Graph, Recommendation System