
Research On Privacy Protection And Verifiability In Federated Learning

Posted on: 2024-04-22    Degree: Master    Type: Thesis
Country: China    Candidate: M L Wang    Full Text: PDF
GTID: 2568307079960249    Subject: Cyberspace security
Abstract/Summary:
Traditional machine learning methods need to move raw data to a data center, which is not conducive to user privacy. Federated learning was proposed to address this data privacy problem. In the general federated learning framework, each local device trains a local model on its own data and then uploads the parameters of the trained local model to a server, which aggregates all uploaded parameters to update the global model. Because local data is never transmitted directly to the server, federated learning protects the privacy of local participants to a certain extent. However, studies have shown that attacks based on generative adversarial networks can extract private information from the parameters uploaded by users, and the aggregator may trick users into accepting a model crafted according to its own interests. Furthermore, the participants themselves are not necessarily reliable. These security issues create a crisis of trust among participants and seriously limit the range of applications of federated learning. Research on privacy protection, verifiable aggregation, and Byzantine robustness in federated learning has therefore become an urgent need. Existing schemes cannot simultaneously guarantee security, model accuracy, and efficiency, so the goal of this thesis is to propose federated learning schemes that balance all three. The work of the thesis consists of the following two parts:

1. Aiming at privacy protection and verifiability in federated learning, and at the balance among security, efficiency, and model accuracy, an efficient and verifiable privacy-preserving federated learning scheme is proposed. The scheme uses hybrid encryption based on Paillier to protect the data privacy of participants and a linear homomorphic hash to verify the correctness of aggregation efficiently. Compared with existing schemes, its security guarantees do not weaken model accuracy, and the computation and communication overhead of each participant does not grow with the number of participants, making the scheme suitable for high-concurrency federated learning scenarios.

2. Aiming at privacy protection and verifiability under Byzantine attacks in federated learning, an efficient and secure federated learning scheme is proposed. The scheme uses a two-server architecture to ensure privacy protection and Byzantine robustness efficiently, and a signature algorithm to supplement the verification algorithm in the presence of Byzantine nodes. Compared with existing schemes, it incurs almost no additional performance loss from its security guarantees, making it suitable for edge computing.
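As an illustration of the building blocks named in the first contribution, the following is a minimal Python sketch of Paillier-based additive aggregation combined with a linearly homomorphic hash for checking the aggregate. It assumes the third-party phe (python-paillier) package, toy hash parameters, and integer-quantized updates; the hybrid-encryption and key-management details of the thesis's actual protocol are omitted, so this is an illustrative reconstruction rather than the scheme itself.

import random
from phe import paillier  # assumed dependency: the python-paillier package

# Toy public parameters for the linearly homomorphic hash (not production-sized).
P = 2**127 - 1                                        # prime modulus
DIM = 4                                               # model dimension in this sketch
GENS = [random.randrange(2, P) for _ in range(DIM)]   # public bases g_1 .. g_DIM

def lh_hash(vec):
    """Linearly homomorphic hash: H(w) = prod_j g_j^{w_j} mod P."""
    h = 1
    for g, w in zip(GENS, vec):
        h = (h * pow(g, int(w), P)) % P
    return h

# Paillier key pair; in a real deployment the secret key would not be held by
# the aggregating server -- key handling is deliberately simplified here.
pub, priv = paillier.generate_paillier_keypair(n_length=1024)

def client_round(update):
    """Client: encrypt the quantized update and publish its hash."""
    return [pub.encrypt(int(w)) for w in update], lh_hash(update)

def aggregate(ciphertext_lists):
    """Server: add ciphertexts element-wise via Paillier's additive homomorphism."""
    agg = list(ciphertext_lists[0])
    for cts in ciphertext_lists[1:]:
        agg = [a + c for a, c in zip(agg, cts)]
    return agg

# Demo with three clients holding random integer-quantized updates.
updates = [[random.randrange(0, 100) for _ in range(DIM)] for _ in range(3)]
cts, hashes = zip(*(client_round(u) for u in updates))
agg_plain = [priv.decrypt(c) for c in aggregate(list(cts))]

# Verification: H(sum_i w_i) must equal prod_i H(w_i) mod P.
expected = 1
for h in hashes:
    expected = (expected * h) % P
assert lh_hash(agg_plain) == expected
print("aggregate:", agg_plain, "-- hash verification passed")

The only property the verification step relies on is H(w1 + w2) = H(w1) * H(w2) mod P, which lets a participant check the published aggregate against the individual hashes without ever seeing the other participants' plaintext updates.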
Keywords/Search Tags: Federated Learning, Privacy Preservation, Verifiable Aggregation, Byzantine Robustness