
Research On Federated Learning Methods Based On Local Differential Privacy

Posted on: 2024-03-04    Degree: Master    Type: Thesis
Country: China    Candidate: C Liu    Full Text: PDF
GTID: 2568307130973929    Subject: Software engineering
Abstract/Summary:
With the rapid development of big data and the Internet of Things (IoT), highly sensitive and valuable data is growing at an unprecedented scale. Users and enterprises can benefit from analyzing this data effectively with artificial intelligence algorithms. However, traditional artificial intelligence algorithms require gathering scattered user and enterprise data for joint training, and the privacy of the data during transmission and model training cannot be fully guaranteed, leaving artificial intelligence algorithms stuck in the "data island" dilemma. To address this problem, local differential privacy federated learning (LDP-FL) has emerged: it not only prevents leakage of the raw data but also prevents adversaries from inferring personal information through model inversion, providing dual protection for data privacy. However, current LDP-FL training still centers on maximizing the value mined from data and often ignores users' demands for personalized privacy protection. Moreover, its training scheme is relatively simple, merely combining local differential privacy and federated learning directly and ignoring the significant impact that LDP's direct perturbation of high-dimensional parameters has on FL training efficiency. The main work of this thesis is divided into the following parts:

(1) Federated learning model based on personalized local differential privacy. To meet the privacy protection needs of different users, this thesis proposes a personalized local differential privacy federated learning model based on the exponential distribution and maximum likelihood estimation. First, a privacy budget quantification scheme is established based on the exponential distribution, taking into account the proportions of users with different privacy requirements, which maps privacy demands to specific privacy budgets. Second, since users who choose different privacy protection levels perturb their local parameters to different degrees, this thesis uses maximum likelihood estimation to construct an unbiased, low-variance mean estimator that accurately aggregates local model parameters under multiple privacy settings. Finally, extensive simulation experiments on the MNIST and Fashion-MNIST datasets validate the proposed model; the results show that it achieves decent model accuracy while satisfying users' varied privacy protection requirements.

(2) Adaptive federated learning with differential privacy protection that incorporates random perturbation. To mitigate the impact of high-dimensional model parameter perturbation on model accuracy, this thesis proposes a layered dimension selection strategy based on local differential privacy mechanisms. The strategy randomly perturbs some gradient dimensions while leaving the rest unchanged, protecting sensitive information while minimizing the loss of model accuracy. Furthermore, to further improve training accuracy, two dynamic privacy budget allocation methods are proposed: the first adjusts the privacy budget in real time according to changes in the training model's accuracy, while the second achieves linear growth of the privacy budget through a predefined polynomial (linear) growth function, avoiding the extra comparison operations required by the first. Finally, simulations on the MNIST and Fashion-MNIST datasets demonstrate that the proposed method effectively improves model training efficiency.
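The following minimal NumPy sketch illustrates the flavor of the mechanisms summarized above; it is not the thesis's exact construction. The Laplace mechanism and inverse-noise-variance weighted averaging stand in for the exponential-distribution budget quantification and the maximum-likelihood mean estimator of part (1), and the random dimension split and linear budget schedule only gesture at the layered dimension selection and dynamic budget allocation of part (2). All function names and parameters are hypothetical.

```python
import numpy as np

def personalized_perturb(params, epsilon, sensitivity=1.0, rng=None):
    """Client side: add Laplace noise calibrated to this client's own
    privacy budget (smaller epsilon -> stronger perturbation)."""
    rng = rng or np.random.default_rng()
    return params + rng.laplace(0.0, sensitivity / epsilon, size=params.shape)

def aggregate_unbiased(client_params, client_epsilons, sensitivity=1.0):
    """Server side: each noisy update is an unbiased estimate of the true
    update, so an inverse-noise-variance weighted mean remains unbiased
    while down-weighting heavily perturbed (small-epsilon) clients."""
    variances = np.array([2.0 * (sensitivity / e) ** 2 for e in client_epsilons])
    return np.average(np.stack(client_params), axis=0, weights=1.0 / variances)

def layered_perturb(grad, epsilon, perturb_fraction=0.5, sensitivity=1.0, rng=None):
    """Layered dimension selection (illustrative only): perturb a random
    subset of gradient dimensions and leave the rest untouched. A real
    scheme must also account for the privacy cost of the unperturbed part."""
    rng = rng or np.random.default_rng()
    flat = grad.flatten()
    k = int(flat.size * perturb_fraction)
    idx = rng.choice(flat.size, size=k, replace=False)
    flat[idx] += rng.laplace(0.0, sensitivity / epsilon, size=k)
    return flat.reshape(grad.shape)

def linear_budget(round_idx, eps_start=0.5, slope=0.05, eps_max=4.0):
    """Predefined linear budget schedule: later rounds receive a larger
    budget (less noise) without any accuracy-comparison step."""
    return min(eps_start + slope * round_idx, eps_max)

# Example round: three clients with different personalized privacy levels.
rng = np.random.default_rng(0)
true_update = np.ones(10)
epsilons = [0.5, 1.0, 4.0]                        # personalized budgets
noisy = [personalized_perturb(true_update, e, rng=rng) for e in epsilons]
print(aggregate_unbiased(noisy, epsilons))        # close to true_update on average
```

Inverse-variance weighting is the standard way to combine unbiased estimates with heterogeneous noise levels; the thesis instead derives its aggregation rule via maximum likelihood estimation, which serves the same purpose of keeping the multi-privacy-level mean estimate unbiased and low-variance.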
Keywords/Search Tags:Federated learning, Local differential privacy, Personalized privacy protection requirements, Dynamic privacy budget allocation, Layered dimension selection