Neural network-based deep learning models rely heavily on data, which may contain sensitive information. Existing research has shown that deep learning models risk leaking sensitive information from their training data, with membership inference attacks being a representative attack against deep learning. Differential privacy, with its provable guarantees of privacy protection, has gradually become the primary privacy protection method in deep learning. When deploying differentially private models in practice, data owners and model designers need to adjust model parameters according to the privacy leakage risk of the data, so that the model remains effective against inference attacks on the training data. It is therefore necessary to evaluate the privacy of differentially private deep learning models in practical applications.

At present, privacy evaluation of differentially private deep learning models still faces several challenges. First, for a published differentially private model, there is a large gap between its theoretical and its actual privacy protection effectiveness. Second, during the design of differentially private models, the maximum privacy leakage risk of the model needs to be measured. Third, model utility must be improved while maintaining the model's privacy protection effectiveness. This paper studies methods for evaluating the privacy protection effectiveness of differential privacy in deep learning and demonstrates the effectiveness of the proposed solutions through experiments. The research covers the following three aspects.

(1) Directional Privacy Evaluation Based on Gradient Norm Analysis. To address the large discrepancy between the theoretical and actual privacy protection effectiveness of differential privacy, a directional privacy evaluation method based on gradient norm analysis is proposed. By identifying the feature data with the highest privacy leakage risk, more accurate privacy evaluation results can be obtained. Specifically, the method analyzes the gradient norms of data under different labels and selects the label with the highest privacy leakage risk as the target of the membership inference attack. Experimental results show that, for the same differentially private model with a privacy budget ε between 10.65 and 300, the attack gain of the membership inference attack in this scheme is twice that of the baseline black-box inference attack. This indicates that, when facing membership inference attacks, the privacy protection effectiveness obtained by this evaluation method is a more meaningful reference for model adjustment.
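As a concrete illustration of the per-label gradient norm analysis described above, the following is a minimal PyTorch-style sketch, assuming a generic classifier `model` and a labelled data loader; the function name `highest_risk_label` and all variable names are illustrative assumptions, not the thesis implementation. It averages per-example gradient norms for each label and returns the label with the largest average norm as the candidate target of the membership inference attack.

```python
import torch
import torch.nn.functional as F
from collections import defaultdict

def highest_risk_label(model, loader, device="cpu"):
    """Rank class labels by average per-example gradient norm; the label
    with the largest norm is treated as the most privacy-sensitive one
    (illustrative sketch, not the thesis implementation)."""
    model.to(device).eval()
    norms = defaultdict(list)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        for xi, yi in zip(x, y):  # loop to obtain per-example gradients
            model.zero_grad()
            loss = F.cross_entropy(model(xi.unsqueeze(0)), yi.unsqueeze(0))
            loss.backward()
            grad = torch.cat([p.grad.flatten() for p in model.parameters()
                              if p.grad is not None])
            norms[int(yi)].append(grad.norm().item())
    avg = {label: sum(v) / len(v) for label, v in norms.items()}
    # The label with the highest average gradient norm becomes the target
    # of the membership inference attack in this sketch.
    return max(avg, key=avg.get), avg
```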
(2) Privacy Risk Measurement Based on Adversarial Labeled Samples. To address the problem of measuring the maximum privacy risk of differentially private models, a privacy risk measurement scheme based on adversarial labeled samples is proposed; by enlarging the confidence gap of adversarial samples, it provides a stronger basis for inference attacks and thereby enables effective measurement of privacy leakage risk. Specifically, the privacy risk measurement of differential privacy is cast as a hypothesis testing problem. A privacy risk measurement framework based on adversarial labeled samples is designed, and corresponding indicators for quantifying privacy leakage risk are defined. An adversarial sample generation algorithm that is robust to gradient clipping is then designed within this privacy leakage risk measurement framework. Experimental results show that, for the same differentially private model, the evaluation results of the compared scheme indicate an effective privacy budget of at least 4 for achieving effective privacy protection, whereas the effective privacy budget obtained by this method is at least 2. This demonstrates that the approach can measure a greater model privacy risk and thus provides a more effective reference for the design of differentially private models.

(3) Differentially Private Model Feedback Optimization Based on Gradient Norm Control. To address the problem of improving model utility under the premise of privacy protection, a feedback optimization method for differentially private models based on gradient norm control is proposed. The method adjusts model-related parameters according to the privacy evaluation results, improving model utility while preserving the privacy protection effect. Specifically, a sigmoid activation function with an adjustable parameter is designed to control the gradient norm of the data, and, based on feedback from the privacy evaluation results, the differential privacy noise intensity σ is adjusted to guarantee privacy protection while improving model utility. Experimental results show that a differentially private model using the ReLU activation function achieves an accuracy of 95.3% with a privacy risk of 0.083. Under the same model architecture, the proposed method achieves an accuracy of 96.8%, an improvement of 1.5 percentage points, while its privacy risk of 0.081 remains essentially at the level of the baseline. This indicates that the proposed method can effectively balance privacy protection and model utility, providing a useful reference for the design of differentially private models.
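To make the gradient norm control idea more tangible, the following is a minimal PyTorch-style sketch of a slope-adjustable sigmoid activation together with a hypothetical feedback step; the names `TunableSigmoid` and `feedback_step`, the slope parameter `k`, and the multiplicative update rate are illustrative assumptions rather than the mechanism specified in the thesis. In a DP-SGD-style training loop, the adjusted σ would play the role of the noise intensity added to clipped gradients.

```python
import torch
import torch.nn as nn

class TunableSigmoid(nn.Module):
    """Sigmoid-style activation with an adjustable slope parameter k.
    Reducing k flattens the activation and shrinks the gradient magnitudes
    that flow through it, which supports gradient norm control.
    (Illustrative sketch; the thesis may use a different parameterization.)"""
    def __init__(self, k: float = 1.0):
        super().__init__()
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.k * x)

def feedback_step(k: float, sigma: float, measured_risk: float,
                  risk_threshold: float, rate: float = 0.05):
    """Hypothetical feedback rule: if the evaluated privacy risk is below the
    acceptable threshold, relax the slope and noise to recover utility;
    otherwise tighten them to strengthen protection."""
    if measured_risk < risk_threshold:
        return k * (1 + rate), sigma * (1 - rate)
    return k * (1 - rate), sigma * (1 + rate)
```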