
Research And Application Of Federated Learning Privacy Preservation Method Based On Differential Privacy

Posted on: 2024-01-07    Degree: Master    Type: Thesis
Country: China    Candidate: D Fan    Full Text: PDF
GTID: 2568307085464764    Subject: Master of Electronic Information (Professional Degree)
Abstract/Summary:
With the continuous development of data-driven information technologies, data security and user privacy have become major concerns. Federated learning protected by differential privacy is a distributed machine learning framework with data privacy protection capability. The differential privacy mechanism protects data by adding noise drawn from a prescribed distribution, but this degrades model usability. Usability is an important measure of model quality, and poor usability limits the deployment of federated learning frameworks in practical application scenarios. When a federated learning architecture is protected with differential privacy, the usability of the model therefore needs to be improved while privacy is still guaranteed. The main work of this paper is as follows:

(1) An adaptive differential privacy federated learning protection method with differentiated noise addition. To address the low accuracy of models after the differential privacy mechanism adds noise perturbation, an adaptive, differentiated-noise differential privacy federated learning protection method is proposed to improve model usability while satisfying the privacy requirement. The method is optimized from two perspectives: clipping threshold setting and noise addition strategy. For clipping threshold setting, an adaptive scheme is proposed: during local model training, the client determines the clipping threshold of each neural network layer from the historical gradient information of that layer, yielding layer-wise, dynamically updated thresholds that better match the actual gradient distribution and thus improve model usability (a minimal sketch of this idea follows the abstract). For the noise addition strategy, a differentiated noise perturbation method is proposed: during local training, the client quantifies the importance of each model parameter by jointly analyzing the magnitude of its gradient update, the magnitude of its weight, and the agreement between the local and global update trends; noise is then added to each parameter according to its importance (see the second sketch below).

(2) A federated learning differential privacy protection framework for power load decomposition scenarios. To address the security issues in the power load decomposition scenario, the improved differential privacy federated learning method is applied to this practical setting to build a federated load decomposition framework with privacy protection capability. Experimental results on public power datasets show that the framework can effectively guarantee security in the power load decomposition scenario.
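The abstract does not specify the exact update rule for the adaptive clipping threshold. The following is a minimal, hypothetical Python sketch of layer-wise clipping driven by a moving window of historical gradient norms; the class name, the window size, and the use of the median are illustrative assumptions, not the thesis' actual method.

```python
import numpy as np

class AdaptiveLayerClipper:
    """Hypothetical per-layer gradient clipping with thresholds derived
    from each layer's recent gradient-norm history."""

    def __init__(self, num_layers, window=10, init_threshold=1.0):
        self.history = [[] for _ in range(num_layers)]   # per-layer norm history
        self.window = window
        self.thresholds = [init_threshold] * num_layers

    def update_thresholds(self, layer_grads):
        # Record the current L2 norm of each layer's gradient and refresh
        # that layer's clipping threshold from its recent history.
        for i, grad in enumerate(layer_grads):
            self.history[i].append(float(np.linalg.norm(grad)))
            recent = self.history[i][-self.window:]
            self.thresholds[i] = float(np.median(recent))   # assumed rule

    def clip(self, layer_grads):
        # Scale each layer's gradient so its L2 norm does not exceed
        # that layer's current threshold.
        clipped = []
        for grad, c in zip(layer_grads, self.thresholds):
            norm = float(np.linalg.norm(grad))
            clipped.append(grad * min(1.0, c / (norm + 1e-12)))
        return clipped
```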
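Likewise, the exact importance measure and noise schedule are not given in the abstract. The sketch below takes one plausible reading: gradient magnitude, weight magnitude, and local/global update agreement are combined into a toy importance score, and Gaussian-mechanism noise is attenuated for more important parameters. The function names, the sign-agreement term, and the 0.5 attenuation factor are assumptions for illustration only.

```python
import numpy as np

def importance_scores(grad, weight, local_update, global_update, eps=1e-12):
    # Toy importance: larger gradients, larger weights, and agreement between
    # the local and global update directions all raise a parameter's score.
    agree = (np.sign(local_update) == np.sign(global_update)).astype(float)
    score = np.abs(grad) * np.abs(weight) * (1.0 + agree)
    return score / (score.max() + eps)                  # normalise to [0, 1]

def add_differentiated_noise(params, scores, clip_c, epsilon, delta, rng=None):
    # Standard Gaussian-mechanism noise scale for L2 sensitivity clip_c,
    # with more important parameters (score near 1) receiving less noise.
    rng = np.random.default_rng() if rng is None else rng
    base_sigma = clip_c * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    sigma = base_sigma * (1.0 - 0.5 * scores)           # assumed attenuation rule
    return params + rng.normal(0.0, sigma, size=params.shape)
```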
Keywords/Search Tags: Privacy protection, Differential privacy, Federated learning, Clipping threshold, Load decomposition