
Research On Generating Adversarial Examples Of Deep Neural Networks

Posted on: 2022-09-23    Degree: Master    Type: Thesis
Country: China    Candidate: G. Z. Zhang    Full Text: PDF
GTID: 2558307109461084    Subject: Software engineering
Abstract/Summary:
Adversarial examples are crafted by adding small perturbations to clean inputs; they are nearly indistinguishable from the originals, yet they cause machine learning models to output wrong results. Deep learning models are easily attacked by adversarial examples, and with the continuous development of deep learning, adversarial examples have become a hot research topic. They concern the security and robustness of neural networks, sit at the intersection of deep learning and computer security, and form an emerging research field. Studying the generation of adversarial examples helps us discover the blind spots of a model and assess the robustness of the network. This thesis focuses on methods for generating adversarial examples against deep neural networks. The main work includes the following aspects:

(1) The influence of adversarial examples on deep learning image classification models is analyzed, and a GAN-based method for generating adversarial examples in the white-box scenario is proposed. The generator takes the original data x as input and outputs an adversarial perturbation δ; the perturbation is added to the original data to form the adversarial example, that is, x′ = x + δ. Four loss functions — GAN loss, attack classification loss, pixel loss, and cycle-consistency loss — constrain the generated adversarial examples.

(2) The influence of adversarial examples on deep-learning-based image segmentation models is demonstrated. Existing attacks on semantic segmentation models all perturb the input via gradient back-propagation of the loss at the last layer. This thesis proposes a method that generates a target segmentation mask based on multi-scale gradients: it computes loss functions over the features of multiple intermediate layers as well as the last layer, and thereby exposes vulnerabilities of biomedical segmentation models. Target attack loss, feature activation loss, segmentation binary cross-entropy loss, and pixel-level loss are used to generate the targeted adversarial examples.

(3) Finally, extensive experiments were performed on three deep learning classification data sets — MNIST, CIFAR-10, and ImageNet — which reduced the perturbation size of the generated adversarial examples and improved the attack success rate against both defended and undefended target models. In addition, comprehensive experiments on the ISIC skin lesion segmentation challenge data set and a glaucoma optic disc segmentation data set show that the prediction mask produced by this method achieves a high IoU with the target mask and high pixel accuracy, while reducing the L2 and L∞ distances between the adversarial and clean examples, so the perturbation required for a successful attack is reduced.
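The white-box formulation in (1) can be illustrated with a minimal sketch: the generator's output is treated as a perturbation δ, clipped to an L∞ budget, and added to the input to form x′ = x + δ. The budget ε, the clipping step, and the random stand-in for the generator's output are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def make_adversarial(x, perturbation, eps=0.03):
    """Form x' = x + delta, with delta clipped to an L-infinity budget eps
    and the result clipped back to the valid pixel range [0, 1].
    `perturbation` stands in for the GAN generator's output G(x)."""
    delta = np.clip(perturbation, -eps, eps)   # enforce ||delta||_inf <= eps
    x_adv = np.clip(x + delta, 0.0, 1.0)       # keep pixels in a valid range
    return x_adv

# Toy usage: a 4x4 "image" and a random stand-in for G(x).
rng = np.random.default_rng(0)
x = rng.random((4, 4))
x_adv = make_adversarial(x, rng.normal(scale=0.1, size=(4, 4)))
```

Because x already lies in [0, 1], the final clip can only move x′ closer to x, so the L∞ constraint ‖x′ − x‖∞ ≤ ε is preserved.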
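The multi-scale idea in (2) can be sketched as a weighted sum of binary cross-entropy terms between predictions taken at several resolutions and the target mask downsampled to match. The layer resolutions, the nearest-neighbour downsampling, and the weights here are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over all pixels."""
    p = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def downsample(mask, factor):
    """Nearest-neighbour downsampling of a 2-D mask by an integer factor."""
    return mask[::factor, ::factor]

def multiscale_loss(preds, target, weights=(0.25, 0.25, 0.5)):
    """Weighted sum of BCE over predictions at several scales, each compared
    against the target mask resized to that scale.
    `preds` maps downsampling factor -> prediction array at that factor."""
    total = 0.0
    for w, (factor, pred) in zip(weights, sorted(preds.items())):
        total += w * bce(pred, downsample(target, factor))
    return total

# Toy usage: an 8x8 target mask and uniform predictions at factors 1, 2, 4.
target = np.zeros((8, 8)); target[2:6, 2:6] = 1.0
preds = {1: np.full((8, 8), 0.5),
         2: np.full((4, 4), 0.5),
         4: np.full((2, 2), 0.5)}
loss = multiscale_loss(preds, target)
```

With every prediction at 0.5 and weights summing to 1, each scale contributes BCE = ln 2, so the combined loss equals ln 2 — a quick sanity check on the weighting.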
Keywords/Search Tags:Deep Neural Networks, Adversarial Example, Target Attack, Generative Adversarial Network, Multi-scale Gradients