
Research On Model Privacy Attack And Defense Methods Based On Gradient Inversion

Posted on: 2024-05-29
Degree: Master
Type: Thesis
Country: China
Candidate: Z F Wang
Full Text: PDF
GTID: 2568307130972709
Subject: Computer Science and Technology
Abstract/Summary:
Deep learning requires massive amounts of data for model training, and the privacy of that training data is subject to strict requirements, so traditional local or centralized machine learning can no longer satisfy these growing training demands. Federated learning, a distributed machine-learning framework suited to training on multi-source data, builds a common model while each party's data stays local: only model parameters are shared, which effectively protects the private information in local training data. However, although federated learning does not share raw data, it must still exchange model parameters, and recent work has shown that these parameters alone can cause privacy leakage.

This thesis studies gradient inversion attack and defense methods targeting the parameter-sharing mechanism of federated learning, and reviews the state of research on training-data security and privacy protection in the federated learning process. On the attack side, focusing on improving the gradient inversion algorithm, a gradient inversion algorithm based on the Wasserstein distance is proposed, which improves both the efficiency and the quality of gradient inversion attacks under federated learning. The main contributions are:

(1) Building on iterative-optimization gradient inversion, the loss function of the inversion algorithm is redefined using the favorable properties of the Wasserstein distance, achieving high-quality, high-resolution reconstruction of inverted images on image datasets.

(2) A double differential-privacy gradient noise perturbation method is proposed to protect the privacy of the federated learning model against gradient inversion attacks.

(3) To preserve model utility while adding the double differential-privacy perturbation noise, zero-concentrated differential privacy (zCDP) is adopted: a relaxation of ε-differential privacy with tighter accounting, whose additive composition is better suited to tracking the cumulative privacy loss of deep-learning training while the parameters are perturbed during training.
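The iterative-optimization gradient inversion that contribution (1) builds on can be sketched as follows: the attacker, who knows the model weights and the gradient a client shared, optimizes a dummy sample until its gradient matches the shared one. This is a minimal sketch on a toy one-layer linear model, using a plain squared-error distance between gradients rather than the thesis's Wasserstein-based loss (whose exact form is not given in the abstract); all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
d = 8
w = rng.normal(size=d)                      # shared model weights (known to all parties)
x_true = rng.normal(size=d)                 # private training sample
y_true = 1.0                                # private regression target
g_true = (w @ x_true - y_true) * x_true     # gradient of 0.5*(w.x - y)^2 wrt w, as shared in FL

def grad_match_loss(x, y):
    # squared distance between the dummy sample's gradient and the shared gradient
    return np.sum(((w @ x - y) * x - g_true) ** 2)

# attacker: iteratively optimize a dummy (x, y) to reproduce the shared gradient
x, y = rng.normal(size=d), 0.0
lr = 0.1
for _ in range(5000):
    r = w @ x - y
    diff = r * x - g_true
    gx = 2.0 * (r * diff + (diff @ x) * w)  # analytic d(loss)/dx
    gy = -2.0 * (diff @ x)                  # analytic d(loss)/dy
    while lr > 1e-12:                       # backtracking: only accept descent steps
        xn, yn = x - lr * gx, y - lr * gy
        if grad_match_loss(xn, yn) < grad_match_loss(x, y):
            x, y, lr = xn, yn, lr * 1.1
            break
        lr *= 0.5
# the dummy sample x converges (up to scale) toward the private sample x_true
```

In this toy setting the shared gradient determines the sample up to a scalar factor, which is why gradient matching recovers it; for deep networks the same loop is run with automatic differentiation and, in the thesis's variant, a Wasserstein-distance loss in place of the squared error.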
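The defense side of contributions (2) and (3) can likewise be sketched. The abstract does not detail the "double" perturbation scheme, so this shows a single standard Gaussian-mechanism step (clip, then add noise) plus zCDP accounting of the cumulative privacy loss over training steps, using the well-known conversion that rho-zCDP implies (rho + 2*sqrt(rho*ln(1/delta)), delta)-differential privacy; the function names and defaults are assumptions for illustration.

```python
import numpy as np

def dp_perturb(grad, clip_norm=1.0, sigma=1.0, rng=None):
    """Clip a gradient to L2 norm clip_norm, then add Gaussian noise with
    standard deviation sigma * clip_norm (the Gaussian mechanism)."""
    rng = np.random.default_rng(0) if rng is None else rng
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def zcdp_epsilon(sigma, steps, delta=1e-5):
    """Cumulative privacy loss under zCDP accounting: each Gaussian-mechanism
    step with noise multiplier sigma satisfies rho = 1/(2*sigma^2) zCDP,
    rho composes additively across steps, and rho-zCDP converts to
    (rho + 2*sqrt(rho*ln(1/delta)), delta)-differential privacy."""
    rho = steps / (2.0 * sigma ** 2)
    return rho + 2.0 * np.sqrt(rho * np.log(1.0 / delta))
```

The additive composition of rho is what makes zCDP convenient for deep-learning training: the accountant simply sums one rho per perturbed step and converts to (epsilon, delta) once, rather than composing (epsilon, delta) guarantees step by step.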
Keywords/Search Tags: Deep learning, Federated learning, Gradient inversion, Differential privacy, Wasserstein distance