
Membership Inference Attacks for Deep Learning

Posted on: 2024-05-11    Degree: Master    Type: Thesis
Country: China    Candidate: P X Huang    Full Text: PDF
GTID: 2558307157984449    Subject: Mathematics

Abstract/Summary:
Deep learning is an important method in artificial intelligence that trains deep neural networks and provides strong support for AI. Although deep learning has become one of the key factors in the success of AI, it also faces threats. The membership inference attack is an attack method that threatens the privacy of deep learning models. This thesis studies both the attack side and the defense side; the main research content and contributions are as follows:

1. A defense method is proposed to address the shortcomings of existing membership inference defenses, which offer poor defensive effectiveness and significantly degrade the target model. The method uses a generative model to adjust the output of the target model so that outputs on training-set and non-training-set inputs follow the same distribution, thereby blocking membership inference attacks that steal training-set privacy by exploiting the difference between these two output distributions. Results show that under this defense, the classification accuracy of existing attack methods is held at around 50%, a reduction of 9.2% compared with similar methods under identical conditions. Since a membership inference attack can be regarded as a binary classifier, this defense effectively prevents model-induced privacy leaks and better protects data privacy.

2. A method is proposed to address the difficulty of obtaining non-target-model training data for membership inference attacks based on differential comparison. The method uses a generative adversarial network to generate samples and then selects non-target-model training data by running a membership inference attack against the generative model. Training sets constructed in this way better match the characteristics of potential non-target-model training samples in the given dataset, effectively improving the accuracy of differential-comparison-based membership inference attacks. Experimental results on multiple datasets show that the method increases attack accuracy by up to 13.18%.
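The abstract frames a membership inference attack as a binary classifier that separates training-set from non-training-set outputs. The following is a minimal illustrative sketch of that idea, not the thesis's actual method: it uses synthetic confidence scores (members modeled as more confident due to overfitting) and a simple threshold rule; real attacks typically train an attack model on shadow-model outputs, and the defense described above would make the two confidence distributions match, driving this accuracy toward 50%.

```python
# Hypothetical sketch: membership inference as binary classification on a
# target model's top-class confidence. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Members (training samples) tend to receive higher confidence than
# non-members when the target model overfits. Beta distributions stand
# in for the two confidence populations.
member_conf = rng.beta(8, 2, size=1000)      # skewed toward 1.0
nonmember_conf = rng.beta(4, 4, size=1000)   # centered near 0.5

def infer_membership(confidences, threshold=0.7):
    """Predict 'member' when top-class confidence exceeds the threshold."""
    return confidences > threshold

tpr = infer_membership(member_conf).mean()      # true positive rate
fpr = infer_membership(nonmember_conf).mean()   # false positive rate
attack_accuracy = 0.5 * (tpr + (1.0 - fpr))     # balanced accuracy
print(f"attack accuracy: {attack_accuracy:.2f}")
```

Under the distribution-matching defense the abstract describes, member and non-member confidences would be drawn from the same distribution, and this classifier's balanced accuracy would fall to roughly 0.5, i.e. no better than guessing.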
Keywords/Search Tags: Artificial Intelligence Security, Deep Learning, Privacy Protection, Membership Inference Attack, Generative Adversarial Networks