Federated learning, a distributed learning framework with privacy-protection features, was first proposed by Google in 2016. In federated learning, the server has no direct access to users' data, yet the clients can still jointly train a high-accuracy global model. However, existing research on federated learning assumes that the clients participating in training are benign, which does not hold in real application scenarios. Clients that are controlled by attackers, or that suffer device failures, may send malicious model updates to the server, resulting in an unacceptable aggregated global model. This is called a poisoning attack. Based on the adversary's goal, poisoning attacks can be divided into untargeted poisoning attacks and targeted poisoning attacks. This thesis analyzes the harm caused by both types of poisoning attacks on the real datasets MNIST, Fashion-MNIST, and CIFAR10, proposes new poisoning approaches that address the limitations of existing ones, and designs corresponding defense methods. The main research results are as follows.

First, the thesis analyzes the limitations of existing untargeted poisoning attacks, improves on them, and proposes a defense against the new attack. According to where the poisoning takes place, existing methods are divided into data poisoning attacks and model poisoning attacks, and their attack effects are verified experimentally. The thesis then points out that existing attack methods do not account for the communication overhead of federated learning and therefore cannot be deployed in real-world scenarios; experiments confirm that these attacks fail once communication-compression methods are applied. Motivated by this, the thesis proposes a new voting-based untargeted poisoning attack that remains effective in federated learning training with communication compression. The thesis then proposes a corresponding defense against this attack and verifies its effectiveness through experiments.

Second, the thesis analyzes the limitations of existing targeted poisoning attacks, improves on them, and proposes corresponding defenses. It first investigates the principles behind existing targeted attack methods. It then points out that, because the clients participating in each round of federated training are randomly selected, a backdoor implanted when malicious clients happen to be chosen only in the early rounds quickly fades, causing the targeted attack to fail. In response, the thesis proposes a method that increases the persistence of distributed backdoors in federated learning and verifies experimentally that it significantly prolongs the backdoor's lifetime. Finally, the thesis proposes corresponding defense measures against this attack and verifies their effectiveness through experiments.
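To make the attack surface described above concrete, the following minimal sketch shows federated averaging (FedAvg) with one attacker-controlled client submitting a scaled malicious update. All names and the toy dimensions are illustrative; this is a generic model-poisoning example, not a reproduction of any specific attack from the thesis.

```python
import numpy as np

def fedavg(updates, weights=None):
    """Average client model updates (FedAvg-style aggregation)."""
    if weights is None:
        weights = np.ones(len(updates)) / len(updates)
    return sum(w * u for w, u in zip(weights, updates))

rng = np.random.default_rng(0)
dim = 10  # toy parameter-vector size

# Nine benign clients send small updates centred on the true gradient.
true_grad = rng.normal(size=dim)
benign = [true_grad + 0.1 * rng.normal(size=dim) for _ in range(9)]

# One attacker-controlled client sends a scaled update in the
# opposite direction (a simple untargeted model-poisoning update).
malicious = -10.0 * true_grad

clean = fedavg(benign)
poisoned = fedavg(benign + [malicious])

# Cosine similarity with the true gradient shows how a single
# client can flip the direction of the aggregated update.
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"clean    vs true grad: {cos(clean, true_grad):+.2f}")
print(f"poisoned vs true grad: {cos(poisoned, true_grad):+.2f}")
```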
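The abstract notes that existing untargeted attacks break down once communication compression is applied. A common compression scheme in this setting is 1-bit sign compression with coordinate-wise majority-vote aggregation (as in signSGD). The sketch below is purely illustrative and is not the thesis's voting-based attack; it shows why magnitude-based scaling, as in the previous sketch, is neutralised by compression: only signs are transmitted, so a scaled-up malicious update carries no extra weight.

```python
import numpy as np

def sign_compress(update):
    """1-bit compression: transmit only the sign of each coordinate."""
    return np.sign(update)

def majority_vote(signed_updates):
    """Server aggregates by coordinate-wise majority vote over signs."""
    return np.sign(np.sum(signed_updates, axis=0))

rng = np.random.default_rng(1)
dim = 10
true_grad = rng.normal(size=dim)

benign = [true_grad + 0.1 * rng.normal(size=dim) for _ in range(9)]
malicious = -10.0 * true_grad  # same scaled attack as before

votes = [sign_compress(u) for u in benign + [malicious]]
agg = majority_vote(votes)

# The scaled attack contributes exactly one vote per coordinate,
# so the benign majority still determines every aggregated sign.
print("agreement with true sign:",
      np.mean(agg == np.sign(true_grad)))
```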
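For the targeted setting, the canonical mechanism is a backdoor trigger: a small pixel pattern stamped onto a fraction of a malicious client's training images whose labels are flipped to an attacker-chosen target class. The sketch below shows this standard trigger construction on MNIST-shaped arrays; the target class, trigger size, and batch are illustrative, and the sketch conveys the general backdoor idea only, not the thesis's distributed-backdoor persistence method.

```python
import numpy as np

def stamp_trigger(images, size=3, value=1.0):
    """Stamp a small bright square (the trigger) into the
    bottom-right corner of each image."""
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = value
    return poisoned

rng = np.random.default_rng(2)
TARGET_CLASS = 7  # attacker-chosen label (illustrative)

# Stand-in for a malicious client's local MNIST batch: 28x28 images.
images = rng.random((32, 28, 28))
labels = rng.integers(0, 10, size=32)

# Poison part of the batch: add the trigger and flip the label.
n_poison = 8
images[:n_poison] = stamp_trigger(images[:n_poison])
labels[:n_poison] = TARGET_CLASS

# A model trained on this mixture associates the trigger with the
# target class while its accuracy on clean inputs stays largely intact.
print("poisoned labels:", labels[:n_poison])
```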