Abundant data resources carry great informational value, and deep learning has developed rapidly on the basis of massive data. However, while data sharing booms, the leakage of sensitive information in data has also become rampant, and privacy attacks against deep learning emerge in an endless stream. Attackers mount black-box or white-box inference attacks on a model to extract the private information contained in its real training dataset. To enable models to defend effectively against such attacks and avoid privacy leakage, existing research combines differential privacy with generative adversarial networks (GANs), introducing differential privacy mechanisms into GANs and training models on generated privacy-preserving sample data to achieve privacy guarantees. However, existing methods generally suffer from loose asymptotic privacy error bounds and noise redundancy. How to better balance the privacy and utility of synthetic data therefore poses an important challenge for research in this field.

This paper aims to generate high-quality synthetic data samples with privacy-preserving properties. First, to protect the privacy of the real training set and ensure good model performance while preventing privacy leakage caused by inference attacks, this paper proposes Efficient PATE (E-PATE), based on the idea of Private Aggregation of Teacher Ensembles (PATE). By optimizing the ensemble model and aggregation strategy, a differential privacy mechanism is introduced that exploits consensus among the ensemble's models, reducing unnecessary privacy budget consumption and improving ensemble performance while tightening the asymptotic error bound. Second, to ensure that a differentially private GAN can generate realistic synthetic sample data free of private information while defending
against inference attacks, this paper further proposes Efficient PATE-WGAN-GP (EP-WGAN-GP), a differential-privacy-based GAN privacy protection method that provides valuable non-private datasets for deep learning. The differentially private ensemble model is introduced into the GAN, the privacy cost incurred during training is tracked with the Rényi accountant, and the model is thereby trained in a privacy-preserving manner while generating realistic samples. Finally, by computing a reasonable allocation of the privacy budget, a strict privacy constraint method is used to guarantee differential privacy and improve data utility, thereby ensuring the security of both the sensitive training set and the model.

The proposed methods are validated on the MNIST, CIFAR10, and ISIC2019 datasets. Through sensitivity analysis, the number of teacher models and the consensus threshold of the teacher ensemble are optimized. On the three datasets, E-PATE reaches 99.24%, 98.99%, and 92.36% Area Under the ROC Curve (AUC), indicating that E-PATE performs well while preventing privacy leakage. Experiments show that, under the same privacy constraints, the proposed method outperforms state-of-the-art methods. On the three datasets, EP-WGAN-GP improves AUC by 3.74%, 5.57%, and 2.29% over PATEGAN, showing that the differentially private GAN proposed in this paper can better generate samples that carry privacy protection properties while closely resembling real data.
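The consensus idea underlying E-PATE can be illustrated with a small sketch. The abstract does not give the exact aggregation rule, so the following is a minimal illustration of the standard PATE-style pattern it builds on: teacher votes are aggregated with a noisy argmax, and a noisy consensus check decides whether a query is answered at all, so that low-consensus queries are skipped instead of consuming privacy budget. All function names, thresholds, and noise scales here are hypothetical placeholders, not the paper's actual parameters.

```python
import numpy as np

def noisy_consensus_vote(teacher_votes, num_classes, threshold,
                         sigma_check, sigma_answer, rng=None):
    """Aggregate one query's teacher predictions with a consensus check.

    teacher_votes: array of per-teacher predicted labels for one query.
    Returns the noisy plurality label, or None when the (noisy) teacher
    consensus is too weak -- the query is skipped, so no privacy budget
    is spent answering it.
    """
    rng = rng or np.random.default_rng()
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    # Noisy consensus check: answer only when the noisy top vote count
    # clears the threshold (hypothetical values, for illustration).
    if counts.max() + rng.normal(0.0, sigma_check) < threshold:
        return None
    # Noisy argmax over vote counts yields a differentially private label.
    return int(np.argmax(counts + rng.normal(0.0, sigma_answer, num_classes)))

# Example: 50 teachers with strong consensus on class 2.
votes = np.array([2] * 45 + [0] * 3 + [1] * 2)
label = noisy_consensus_vote(votes, num_classes=3, threshold=30,
                             sigma_check=5.0, sigma_answer=2.0)
```

Answering only high-consensus queries is precisely what lets an ensemble both reduce redundant noise and save privacy budget: near-unanimous votes survive the added noise unchanged, while split votes, which would leak the most about individual teachers, are never released.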