
Research and Implementation of a Transferable Adversarial Example Generation Method

Posted on: 2024-09-15
Degree: Master
Type: Thesis
Country: China
Candidate: L H Li
Full Text: PDF
GTID: 2568306941984289
Subject: Computer technology

Abstract/Summary:
The emergence of adversarial examples reveals the vulnerability of deep learning models and shows that research on them needs to go deeper. Studying adversarial examples helps explore how deep learning models work, such as how they learn features, and helps improve model robustness. A key direction in adversarial example research is the transfer-based attack: adversarial examples generated against one (surrogate) model can transfer to other models and remain effective against them. Current problems in adversarial example generation include overfitting of adversarial examples and shattered gradients.

To alleviate the adversarial example overfitting problem, this paper proposes the RM (Random Masking) algorithm, which randomly masks images at multiple granularities to change the model's perceptual region during adversarial example optimization, mitigating overfitting to the surrogate model and thus improving transferability. To avoid masking the key regions of an image, the algorithm adopts a multi-copy design: it replicates the image into multiple copies and applies the random masking operation to each, so the model observes the image from multiple angles.

For the shattered gradients problem, this paper proposes the FI (Forward Iteration) algorithm, which uses forward iterative gradients to refine the current gradient computation, stabilize the gradient update direction, improve optimization efficiency, make it easier to find the global optimum, and improve transferability. To keep gradient noise in the forward iteration process from requiring too many forward iterations, the algorithm also incorporates a multi-sampling design to further stabilize the update direction.

A subset of the ILSVRC 2012 validation set is selected as the dataset, and seven different models are used to verify the effectiveness of the algorithms. The effectiveness of the proposed method is demonstrated by comparing multiple sets of experiments, including single-algorithm attacks, integrated-algorithm attacks, and integrated-model attacks.

Based on this adversarial example generation method, the paper also designs and implements an adversarial example generation system. The system contains modules for user management, dataset management, adversarial example generation, adversarial example management, and adversarial example evaluation. The overall design of the system and each functional module are described in detail, and the system is tested to demonstrate its usability and effectiveness.
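The multi-granularity random masking with multiple copies described for the RM algorithm can be sketched as follows. This is a minimal NumPy illustration of the idea, not the thesis's implementation: the function, parameter names (`num_copies`, `grids`, `mask_prob`), and the choice of zero-filling masked cells are all assumptions made for demonstration.

```python
import numpy as np

def random_mask_copies(image, num_copies=4, grids=(2, 4, 8), mask_prob=0.3, rng=None):
    """Create several randomly masked copies of an image at multiple granularities.

    For each copy, a grid granularity is picked at random and a random subset of
    grid cells is zeroed out, so the surrogate model perceives a different region
    of the image in each copy during adversarial example optimization.
    """
    rng = rng or np.random.default_rng()
    h, w, _ = image.shape
    copies = []
    for _ in range(num_copies):
        g = rng.choice(grids)                      # granularity: a g x g grid
        cell_h, cell_w = h // g, w // g
        masked = image.copy()
        for i in range(g):
            for j in range(g):
                if rng.random() < mask_prob:       # randomly drop this cell
                    masked[i * cell_h:(i + 1) * cell_h,
                           j * cell_w:(j + 1) * cell_w, :] = 0
        copies.append(masked)
    return copies
```

In a transfer-attack loop, the gradients computed on these copies would be averaged before each perturbation update, which is what lets the optimization see the image "from multiple angles" rather than overfitting to one fixed view.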
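The forward-iteration idea with multi-sampling described for the FI algorithm can be sketched as follows. This is one plausible reading of the abstract, assuming a lookahead scheme: take small forward steps along the current gradient, average gradients at several noisy samples around each forward point, and use the accumulated gradient as the final update direction. The function signature, the sign-based forward step, and all parameter names are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def fi_gradient(grad_fn, x, step=0.01, num_forward=3, num_samples=4,
                noise_std=0.05, rng=None):
    """Estimate a stabilized gradient at x via forward iteration.

    grad_fn: callable returning the loss gradient at a point.
    Each forward iteration averages gradients over `num_samples` noisy
    neighbors (multi-sampling) to suppress gradient noise, then advances
    the lookahead point along the averaged gradient's sign.
    """
    rng = rng or np.random.default_rng()
    x_fwd = x.copy()
    accumulated = np.zeros_like(x)
    for _ in range(num_forward):
        # multi-sampling: average gradients around the current forward point
        g = np.mean([grad_fn(x_fwd + rng.normal(0.0, noise_std, x.shape))
                     for _ in range(num_samples)], axis=0)
        accumulated += g
        x_fwd = x_fwd + step * np.sign(g)          # small forward (lookahead) step
    return accumulated / num_forward
```

The returned direction would then drive the usual iterative perturbation update; averaging over forward points smooths out the shattered, locally erratic gradients that hurt transferability.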
Keywords/Search Tags:adversarial examples, black box attacks, transfer-based attacks, adversarial examples overfitting, shattered gradients