
Research On Defenses Against Poisoning Attacks In Federated Learning

Posted on: 2022-03-28  Degree: Master  Type: Thesis
Country: China  Candidate: Y C Tian  Full Text: PDF
GTID: 2568306839488514  Subject: Computer Science and Technology
Abstract/Summary:
Federated learning is a novel privacy-preserving computation technology that can effectively resolve the tension between privacy preservation and data utilization. However, as it is widely deployed, its own security problems become prominent. Adversaries can degrade the utility of the model through poisoning attacks, which affect the whole training process. Research on defenses against poisoning attacks, aimed at enhancing the robustness of federated learning, therefore has significant academic and practical value. However, existing defenses were proposed without considering the unique features of federated learning, which makes them inapplicable or ineffective in real-world applications. Based on these observations, this dissertation studies defenses suited to the federated learning environment.

First, this dissertation studies defenses against data poisoning attacks in the federated learning setting. An analysis of the particularities of this setting shows that both the FedAvg algorithm and non-IID data make defense more difficult. Based on this result, the proposed method identifies potentially malicious models through more sophisticated mechanisms and suppresses their influence through a weight-assignment policy. Because the server cannot perceive the attacker's presence in a real-world environment, this method suppresses the effect of malicious models while maintaining comparatively high model accuracy even when there are no adversaries or the data are non-IID.

Second, this dissertation analyses a particular class of model poisoning attacks in federated learning, both experimentally and theoretically. The analysis reveals the effectiveness of such attacks and the difficulty of defending against them with existing approaches. To this end, the dissertation explores a strategy that aims to eliminate the patterns of malicious models. Specifically, two defenses are proposed: one based on differential privacy and one based on selective aggregation. The selective-aggregation method takes the sign of each model update as the main feature for selecting and aggregating local models. Experiments show that the selective-aggregation defense can defeat most model poisoning attacks, which demonstrates its effectiveness and points to a new direction for defending against model poisoning attacks.
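The weight-assignment idea described for the data-poisoning defense can be sketched as follows. The abstract does not specify the scoring mechanism, so this illustration assumes a simple hypothetical rule: each client update is scored by its distance to the coordinate-wise median, and more distant updates receive exponentially smaller aggregation weights.

```python
import numpy as np

def weighted_aggregate(updates):
    """Down-weight updates far from the coordinate-wise median, then average.

    The scoring rule (distance to the median) is an illustrative assumption,
    not the dissertation's actual mechanism.
    """
    updates = np.asarray(updates, dtype=float)          # shape: (n_clients, n_params)
    median = np.median(updates, axis=0)                 # robust reference point
    dists = np.linalg.norm(updates - median, axis=1)    # anomaly score per client
    weights = np.exp(-dists / (dists.mean() + 1e-12))   # farther -> smaller weight
    weights /= weights.sum()                            # normalize to sum to 1
    return weights @ updates                            # weighted average

rng = np.random.default_rng(0)
# Four benign clients near the true update [1, 1], one gross outlier.
benign = [np.array([1.0, 1.0]) + 0.1 * rng.standard_normal(2) for _ in range(4)]
malicious = [np.array([10.0, -10.0])]
agg = weighted_aggregate(benign + malicious)
```

Because no client is ever hard-rejected, the server needs no certainty that an attacker exists: benign rounds keep nearly uniform weights, while an outlier's contribution is smoothly suppressed.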
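The differential-privacy-based defense is not detailed in the abstract; a common construction in this family (as in DP-FedAvg-style aggregation) clips each local update to a fixed L2 norm and adds Gaussian noise to the average, which also bounds the influence any single malicious update can exert. The parameter values below are illustrative assumptions.

```python
import numpy as np

def dp_aggregate(updates, clip=1.0, sigma=0.5, rng=None):
    """Clip each update to L2 norm `clip`, average, then add Gaussian noise.

    Clipping bounds per-client influence; the noise scale `sigma` trades
    privacy/robustness against accuracy. Values here are placeholders.
    """
    rng = rng or np.random.default_rng(0)
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)) for u in updates]
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip / len(updates), size=mean.shape)
    return mean + noise

# A huge malicious update is clipped to norm 1 before averaging.
agg_dp = dp_aggregate([np.array([100.0, 0.0]), np.array([0.0, 0.1])], sigma=0.0)
```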
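The selective-aggregation defense uses the sign of each model update as its main feature. A simplified sketch, assuming majority-sign agreement as the selection criterion (the exact selection rule and threshold are assumptions for illustration):

```python
import numpy as np

def sign_selective_aggregate(updates, threshold=0.5):
    """Select local updates by sign agreement, then average the survivors.

    Assumed criterion: keep an update if the fraction of coordinates whose
    sign matches the element-wise majority sign exceeds `threshold`.
    """
    updates = np.asarray(updates, dtype=float)            # (n_clients, n_params)
    majority_sign = np.sign(np.sum(np.sign(updates), axis=0))
    agreement = (np.sign(updates) == majority_sign).mean(axis=1)
    selected = updates[agreement > threshold]
    if len(selected) == 0:                                # fall back to plain mean
        selected = updates
    return selected.mean(axis=0), agreement

# Three benign updates pointing one way, one sign-flipped adversarial update.
benign = [np.array([0.5, -0.3, 0.8])] * 3
attack = [np.array([-0.5, 0.3, -0.8])]
agg_sel, agreement = sign_selective_aggregate(benign + attack)
```

Because a sign-flipping adversary disagrees with the majority on every coordinate, its agreement score is zero and it is excluded from the aggregate entirely.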
Keywords/Search Tags: AI security, federated learning, poisoning attacks, robustness