
A Membership Inference Attack Defense Method For Image Classification Models

Posted on: 2024-06-08 | Degree: Master | Type: Thesis
Country: China | Candidate: H K Fang | Full Text: PDF
GTID: 2568307067472474 | Subject: Computer technology
Abstract/Summary:
Image classification models based on deep neural networks have been widely applied across computer vision, in tasks such as face recognition, object recognition, and scene recognition, and have made remarkable progress. However, their wide application has also raised security concerns, including data privacy leakage and reliability problems, and data privacy in particular has attracted much attention in recent years. As an emerging privacy attack against machine learning models, a membership inference attack infers from a model's outputs whether a given data sample was used in training, thereby leaking private information about the training data. Membership inference attacks pose a serious threat to data privacy, and studying defenses against them helps promote the sustainable, secure, and robust development of AI technologies.

Many defense methods against membership inference attacks have been proposed, but shortcomings remain. In particular, existing defenses struggle to balance effectiveness and usability: they either fail to reduce the attack accuracy to the level of random guessing, or they degrade the performance of the protected model. To address this trade-off, this thesis proposes an efficient and practical defense method from the perspective of data augmentation. The main work of this thesis is as follows:

(1) Taking the image classification model as the research object, we propose an Adaptive Data Augmentation defense that balances defense effectiveness and usability. Adaptive Data Augmentation consists of two modules: Random Mixup Augmentation and Adaptive Reserve Augmentation. Random Mixup Augmentation mixes randomly augmented data features with the corresponding original labels to generate new data that replaces the original data during training (see the illustrative sketch following this abstract). This smooths the model's decision boundary, significantly reduces overfitting, and narrows the gap between the model's performance on training and non-training data, which underpins the method's defense effectiveness. Adaptive Reserve Augmentation augments the data while retaining the key regions of the original samples, improving the fidelity of the new data; it further relocates these regions via an Adaptive Affine Transformation so that the key regions of the original and augmented data do not overlap, avoiding the privacy leakage such overlap would cause. This preserves the method's usability while further strengthening its defense effectiveness.

(2) Using several standard image datasets and image classification models, we simulate multiple attackers with different background knowledge who launch various membership inference attacks, in order to evaluate the effectiveness and usability of the proposed defense, and we compare it with mainstream defense methods. The experimental results show that the proposed defense effectively resists the attacks launched by each attacker, reducing the attack accuracy to about 50%, i.e., the level of random guessing, without compromising the accuracy of the model. Compared with other state-of-the-art defenses, the proposed method not only has excellent defense capability but also performs well in terms of usability.
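The abstract describes Random Mixup Augmentation only at a high level. The following is a minimal PyTorch sketch of one plausible reading, in which two randomly augmented views of each image are interpolated at the pixel level while the sample's original label is kept unmixed. The function name random_mixup_batch, the specific transforms, and the Beta-distributed mixing coefficient are illustrative assumptions, not the thesis's actual implementation.

```python
import torch
import torchvision.transforms as T

# Hypothetical sketch of the Random Mixup Augmentation idea: two randomly
# augmented views of each image are interpolated, and the mixed image is
# trained with the ORIGINAL label, so the model never sees a raw training
# image directly.

augment = T.Compose([
    T.RandomResizedCrop(32, scale=(0.8, 1.0)),
    T.RandomHorizontalFlip(),
])

def random_mixup_batch(images: torch.Tensor, labels: torch.Tensor,
                       alpha: float = 1.0):
    """Return mixed images paired with the untouched original labels.

    images: (B, C, H, W) float tensor; labels: (B,) long tensor.
    """
    # Mixing coefficient drawn from Beta(alpha, alpha), as in standard mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Two independent random augmentations of every image in the batch.
    view_a = torch.stack([augment(img) for img in images])
    view_b = torch.stack([augment(img) for img in images])
    mixed = lam * view_a + (1.0 - lam) * view_b  # feature-level mixing only
    return mixed, labels                         # labels are NOT mixed

# Usage inside a standard training loop (model, criterion assumed to exist):
#   mixed, y = random_mixup_batch(x_batch, y_batch)
#   loss = criterion(model(mixed), y)
```

Under this reading, the defense's effect comes from the model being fit to synthetic interpolations rather than the raw training points, which plausibly shrinks the train/non-train confidence gap that membership inference attacks exploit.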
Keywords/Search Tags:Deep Learning, Membership Inference Attack, Data Augmentation, Image Classification