
Model Heterogeneity And Security Of Federated Graph Neural Networks In Graph Classification

Posted on: 2023-04-08
Degree: Master
Type: Thesis
Country: China
Candidate: J H Xie
Full Text: PDF
GTID: 2558307070483184
Subject: Engineering

Abstract/Summary:
Graph Neural Networks (GNNs) have succeeded in many fields thanks to their powerful graph-data processing capabilities. However, privacy concerns and regulatory constraints make it difficult to collect graph-structured data from different institutions and train GNNs centrally. As a solution, Federated Graph Neural Networks (Fed-GNNs) support multi-party collaborative training of a shared model through parameter or feature sharing, without sharing raw data, and have attracted significant attention in recent years.

Existing Fed-GNN schemes, however, do not consider that the participants who jointly train the shared model often hold different private GNN architectures. This is the problem of model heterogeneity, and it causes existing Fed-GNN schemes to fail in heterogeneous-model scenarios. To address it, this paper introduces knowledge distillation and proposes a graph federated learning model based on knowledge distillation: each client trains its local private model through knowledge distillation against a shared model, whose parameters are then updated within a federated learning framework. The proposed scheme is evaluated on several graph classification datasets; the experimental results show an average improvement of 12.17% over the baseline methods, verifying its effectiveness and advancement.

Building on the above Fed-GNN scheme, this paper further conducts an in-depth study of Fed-GNN security in graph classification applications. From the perspective of untargeted attacks, both data poisoning and model poisoning methods are applied to the Fed-GNN graph classification task. Experiments on multiple datasets show that both poisoning methods degrade model accuracy to different degrees; model poisoning can cause an accuracy drop of up to 19.7%. Even under existing federated learning defenses, the proposed poisoning attacks still reduce model accuracy by 5%-10%, demonstrating the vulnerability of Fed-GNN schemes to poisoning attacks.
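To make the distillation-based federated workflow concrete, the following is a minimal sketch of one communication round, assuming PyTorch only. The toy mean-aggregation GNN, `distill_step`, and `fedavg` helpers are hypothetical stand-ins chosen for illustration, not the thesis's actual implementation.

# Minimal sketch (assumptions: PyTorch only; a toy mean-aggregation GNN stands
# in for each client's private model; all names below are hypothetical).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGNN(nn.Module):
    """Toy graph classifier: mean-aggregate neighbor features, then classify.
    hidden_dim may differ per client, modeling heterogeneous private models."""
    def __init__(self, in_dim, hidden_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, n_classes)

    def forward(self, x, adj):
        # One round of neighbor averaging (adj: dense, row-normalized matrix),
        # then a graph-level readout by mean pooling over nodes.
        h = F.relu(self.lin1(adj @ x))
        return self.lin2(h.mean(dim=0, keepdim=True))

def distill_step(teacher, student, x, adj, T=2.0):
    """One knowledge-distillation step: the shared student matches the
    private teacher's softened logits via KL divergence."""
    with torch.no_grad():
        t_logits = teacher(x, adj)
    s_logits = student(x, adj)
    return F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1),
                    reduction="batchmean") * T * T

def fedavg(states):
    """Server side: average the shared model's parameters across clients."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(dim=0)
    return avg

# --- one communication round over two heterogeneous clients -----------------
in_dim, n_classes = 8, 2
teachers = [ToyGNN(in_dim, 32, n_classes),   # private models with
            ToyGNN(in_dim, 64, n_classes)]   # different hidden sizes
shared = ToyGNN(in_dim, 16, n_classes)       # public model, identical everywhere

client_states = []
for teacher in teachers:
    student = copy.deepcopy(shared)
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    x, adj = torch.randn(10, in_dim), torch.eye(10)  # placeholder graph
    loss = distill_step(teacher, student, x, adj)
    opt.zero_grad(); loss.backward(); opt.step()
    client_states.append(student.state_dict())

shared.load_state_dict(fedavg(client_states))  # aggregated shared model

In this sketch, heterogeneity lives entirely in the private teachers; only the small shared student, which every client instantiates identically, is ever aggregated, which is what allows standard FedAvg to be reused unchanged.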
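The two untargeted attacks can likewise be sketched in a few lines. The sketch below assumes the same PyTorch state-dict setup as above; `flip_labels` and `poisoned_update` are hypothetical illustrations of generic label-flip data poisoning and update-scaling model poisoning, not the specific attack construction used in the thesis's experiments.

# Hedged sketch of the two untargeted poisoning strategies (names hypothetical).
import torch

def flip_labels(labels, n_classes):
    """Data poisoning: relabel each local training graph to a wrong class
    by adding a random nonzero offset modulo the number of classes."""
    return (labels + torch.randint(1, n_classes, labels.shape)) % n_classes

def poisoned_update(global_state, local_state, scale=5.0):
    """Model poisoning: instead of the honest local state, upload
    global - scale * (local - global), i.e., a flipped and amplified update
    that pulls the aggregated shared model away from the honest average."""
    crafted = {}
    for k in global_state:
        delta = local_state[k].float() - global_state[k].float()
        crafted[k] = global_state[k].float() - scale * delta
    return crafted

Under plain FedAvg, a single malicious client's scaled update is only damped by the averaging factor 1/n, which is why even a few attackers can cause the accuracy drops reported above.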
Keywords/Search Tags:Federated graph neural network, Model heterogeneity, Security, Poisoning attack