
Research On Adversarial Attacks And Robustness Of Vertical Federated Learning

Posted on: 2024-05-07 | Degree: Master | Type: Thesis
Country: China | Candidate: Y C Long | Full Text: PDF
GTID: 2568307067972819 | Subject: Computer technology
Abstract/Summary:
As artificial intelligence continues to develop, increasingly powerful models require large amounts of data for training and optimization. However, legal and regulatory restrictions prevent direct data sharing, making data silos an increasingly common problem: data is confined to specific applications or organizations and cannot be shared or reused, which wastes resources and reduces efficiency. Federated learning emerged to address this problem; it keeps data within each party's own domain while still enabling multi-party collaborative modeling. However, the training process of federated learning is controlled by the data owners, which makes robustness difficult to guarantee and leaves the system susceptible to adversarial attacks such as adversarial example attacks and backdoor attacks. Moreover, when the data is vertically partitioned, the parties must fuse their datasets through feature selection and transformation to train high-quality models; the training process of vertical federated learning is still controlled by the data owners, and the heterogeneity of the data makes robustness even harder to ensure. To the best of the author's knowledge, no prior research has addressed adversarial attacks and robustness in this setting. Motivated by this need and the current state of research, this thesis studies adversarial attacks and robustness in vertical federated learning:

(1) Two adversarial attack methods for vertical federated learning are proposed. By analyzing in depth how vertical federated learning differs from ordinary neural networks and targeting its characteristics, this thesis proposes an adversarial example attack based on model completion and universal (general) perturbation generation. Because each participant cannot access the other parties' models in the vertical federated learning setting, the attacker first trains a shadow model with a semi-supervised model completion method, then generates a universal perturbation against the shadow model and exploits the perturbation's transferability to attack the vertical federated learning system. In addition, since an attacker in this setting controls only its own training data and training process, a targeted backdoor attack based on backdoor sample enhancement is proposed: a backdoor pattern is first added as a trigger to a small number of samples whose labels are known, and the weight of these backdoor samples is then enhanced during training so that the vertical federated learning system learns the backdoor pattern. Experiments show that both attacks achieve a very high success rate.

(2) A certifiably robust inference framework for vertical federated learning is proposed. By studying the characteristics of adversarial perturbations and backdoor patterns in depth, a certified robustness framework in the ℓp norm is established based on randomized smoothing and differential privacy techniques. Under this framework, no perturbation within a given ℓp-norm ball can change the inference result. Since both adversarial example attacks and backdoor attacks rely on perturbations bounded in an ℓp norm, the framework provides theoretical robustness guarantees against both types of attack. Experiments show that no attack perturbation within the given ℓp-norm ball causes the vertical federated learning system to produce an erroneous inference result.
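To make the first attack concrete, the following is a minimal sketch (not the thesis's implementation) of generating a single shared perturbation against a locally trained shadow model and relying on its transferability to the full system. It assumes the shadow model has already been obtained via semi-supervised model completion; `shadow_model`, `loader`, the L-inf budget `eps`, and the sign-step update are illustrative assumptions.

```python
# Minimal sketch of universal-perturbation generation against a shadow model.
# Assumes the shadow model was already trained via semi-supervised model
# completion; shadow_model, loader, eps and step are illustrative assumptions.
import torch
import torch.nn.functional as F

def universal_perturbation(shadow_model, loader, eps=0.1, step=0.01, epochs=5):
    shadow_model.eval()
    delta = None  # one shared perturbation reused for every sample
    for _ in range(epochs):
        for x, y in loader:  # the attacker's own feature partition
            if delta is None:
                delta = torch.zeros_like(x[:1], requires_grad=True)
            loss = F.cross_entropy(shadow_model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():            # gradient ascent on the loss,
                delta += step * grad.sign()  # then project back into the
                delta.clamp_(-eps, eps)      # L-inf budget
    # At inference time, delta is added to the attacker's inputs; the attack
    # relies on its transferability to fool the vertical federated system.
    return delta.detach()
```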
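The backdoor attack can likewise be sketched in a simplified form: stamp a trigger pattern on a few samples the attacker knows to carry the target label, and return per-sample weights that enhance those samples during training. The column indices, trigger value, and boost factor here are hypothetical; in the actual vertical federated protocol, where the loss is computed by the label-holding party, the attacker can realize this enhancement only through its own data and local updates, which the sketch abstracts away.

```python
# Minimal sketch of "backdoor sample enhancement" on the attacker's feature
# partition. known_idx, trigger_cols, trigger_value and boost are hypothetical;
# the thesis's exact weighting mechanism inside VFL training is not reproduced.
import torch

def poison_and_upweight(features, known_idx, trigger_cols=(0, 1),
                        trigger_value=1.0, boost=10.0):
    poisoned = features.clone()
    for i in known_idx:                      # samples the attacker knows to
        poisoned[i, list(trigger_cols)] = trigger_value  # carry the target label
    weights = torch.ones(len(features))
    weights[known_idx] = boost               # enhance the backdoor samples so
    return poisoned, weights                 # the joint model learns the trigger
```

The returned weights could, for example, scale the per-sample loss or drive a weighted sampler during local training; either way, the goal is that the trigger pattern is learned and later activates the target prediction at inference time.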
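For the certified-inference framework, the sketch below illustrates standard randomized-smoothing prediction and the resulting certified radius, shown here for the ℓ2 case; the thesis's framework additionally incorporates differential privacy and applies the smoothing inside the vertical federated inference pipeline, which is not reproduced. `vfl_predict` (the end-to-end inference call), `sigma`, and the Monte-Carlo sample count are assumptions.

```python
# Minimal sketch of certified inference via randomized smoothing (l2 case).
# vfl_predict, sigma and n are assumptions; the thesis's framework further
# combines smoothing with differential privacy inside the VFL pipeline.
import torch
from scipy.stats import norm

def smoothed_predict(vfl_predict, x, num_classes, sigma=0.25, n=1000):
    counts = torch.zeros(num_classes)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)   # Gaussian noise on the input
        counts[vfl_predict(noisy)] += 1
    top2 = counts.topk(2)
    p_a, p_b = (top2.values / n).tolist()
    # A rigorous certificate replaces p_a, p_b with binomial confidence bounds.
    radius = sigma / 2 * (norm.ppf(p_a) - norm.ppf(p_b))
    return int(top2.indices[0]), max(radius, 0.0)  # class, certified l2 radius
```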
Keywords/Search Tags:Machine Learning, Privacy and Security, Vertical Federated Learning, Adversarial Attack, Robustness