In recent years, deep neural network image classification models have been widely deployed across social production and daily life. However, recent studies have shown that deep neural network models are vulnerable to adversarial examples: adding small perturbations to benign examples can cause a model to make wrong decisions, which poses a serious threat to the broad application of artificial intelligence algorithms. Many researchers have therefore studied adversarial attack methods in an attempt to understand the decision-making process of deep neural network models and to inform the design of more secure and robust models. This paper conducts extensive, in-depth research on two problems of existing adversarial example generation methods: large perturbations and low adversarial transferability. The specific research contents are as follows.

First, the relevant background in the field of adversarial attack and defense is introduced, along with the theories and methods involved in this research.

Second, to address the low adversarial transferability common to adversarial examples generated by existing methods, a method for improving transferability is explored. This paper analyzes the relationship between the training process of deep neural network models and the generation process of adversarial examples, integrates the belief optimization idea, which has good generalization characteristics, removes the sign-function constraint of the fast gradient sign method, and constructs a belief-based iterative fast gradient method that improves adversarial transferability. To address the global adversarial perturbations produced by existing methods, a method for reducing inefficient perturbations is explored. This paper analyzes the decision-making characteristics of deep neural network models, locates the salient regions of these models with respect to benign examples, and designs a perturbation restriction strategy based on the salient region to improve the image quality and visual effect of adversarial examples.

Third, to address the poor performance of existing adversarial attack methods against advanced adversarial defenses, and with the aim of further improving the adversarial transferability of adversarial examples, a method that threatens advanced adversarial defenses is explored. This paper analyzes the problems in the iterative process of iterative adversarial example generation methods, evaluates the influence of gradient estimation variance based on the idea of variance reduction, and constructs a variance-tuning-based iterative fast gradient method that further improves adversarial transferability. On this basis, the constructed belief-based iterative fast gradient method and the designed perturbation restriction strategy are integrated with this method into a combined method that achieves high adversarial transferability, low perturbations, and a real threat to advanced defense models.

Finally, the ImageNet dataset is used to verify the feasibility and effectiveness of the proposed methods. The proposed methods are compared with representative existing adversarial attack methods and further validated against advanced defense methods.
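The abstract does not define the belief-based update rule, so the following is only a minimal sketch of one plausible reading: an AdaBelief-style adaptive step applied in input space, which replaces the sign function with a moment-normalized direction. The function name `belief_ifgm`, the model interface, and all hyperparameters are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def belief_ifgm(model, x, y, eps=16/255, steps=10,
                beta1=0.9, beta2=0.999, delta=1e-8):
    """Iterative fast gradient attack with an AdaBelief-style update (sketch).

    Assumption: the "belief" idea tracks the deviation of the gradient from
    its running mean and uses m / sqrt(s) as the step direction instead of
    sign(grad). Inputs x are assumed to lie in [0, 1].
    """
    alpha = eps / steps            # per-step budget
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)        # first-moment (mean) estimate
    s = torch.zeros_like(x)        # belief (gradient-deviation) estimate

    for t in range(1, steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        m = beta1 * m + (1 - beta1) * grad
        s = beta2 * s + (1 - beta2) * (grad - m) ** 2
        m_hat = m / (1 - beta1 ** t)          # bias correction
        s_hat = s / (1 - beta2 ** t)

        step = m_hat / (s_hat.sqrt() + delta)
        # Rescale so the largest per-pixel step equals alpha, replacing sign().
        step = alpha * step / step.abs().amax(dim=(1, 2, 3), keepdim=True)

        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = (x_adv.detach() + step)
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv
```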
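The salient-region perturbation restriction can be illustrated with a simple gradient-magnitude saliency mask. This is a hedged sketch of the general strategy described above, not the designed method itself; `salient_region_mask` and `keep_ratio` are hypothetical names, and the saliency estimator (input-gradient magnitude) is an assumption.

```python
import torch
import torch.nn.functional as F

def salient_region_mask(model, x, y, keep_ratio=0.3):
    """Binary mask that keeps only the most salient pixels (sketch).

    Saliency is approximated by the per-pixel magnitude of the input
    gradient of the classification loss; perturbations are restricted
    to the top `keep_ratio` fraction of pixels.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    saliency = grad.abs().amax(dim=1, keepdim=True)   # (B, 1, H, W)
    k = int(keep_ratio * saliency[0].numel())
    thresh = saliency.flatten(1).topk(k, dim=1).values[:, -1]
    return (saliency >= thresh.view(-1, 1, 1, 1)).float()
```

A perturbation `delta` produced by any attack can then be confined to the salient region as `x_adv = (x + delta * mask).clamp(0, 1)`, which leaves the non-salient background of the benign image untouched.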
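The variance-tuning idea can likewise be sketched: at each iteration, the current gradient is corrected by a variance term estimated from gradients sampled in a neighborhood of the current adversarial example before the momentum update. This follows the general variance-tuning scheme for iterative fast gradient methods; the authors' exact formulation may differ, and `mu`, `n_samples`, and `beta` are illustrative hyperparameter names.

```python
import torch
import torch.nn.functional as F

def vt_ifgsm(model, x, y, eps=16/255, steps=10, mu=1.0,
             n_samples=20, beta=1.5):
    """Variance-tuning iterative fast gradient method (sketch)."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)        # accumulated momentum
    v = torch.zeros_like(x)        # variance correction term

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Momentum update with the variance-corrected gradient,
        # normalized by its mean absolute value (an L1-style rescaling).
        corrected = grad + v
        g = mu * g + corrected / corrected.abs().mean(dim=(1, 2, 3), keepdim=True)

        # Estimate the gradient variance from random neighbors of x_adv.
        neighbor_grad = torch.zeros_like(x)
        for _ in range(n_samples):
            x_near = (x_adv.detach()
                      + torch.empty_like(x).uniform_(-beta * eps, beta * eps))
            x_near.requires_grad_(True)
            loss_near = F.cross_entropy(model(x_near), y)
            neighbor_grad += torch.autograd.grad(loss_near, x_near)[0]
        v = neighbor_grad / n_samples - grad

        # Sign step, projected into the eps-ball and valid pixel range.
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv
```

The extra forward-backward passes over the sampled neighbors make each iteration more expensive, which is the usual trade-off for the more stable update direction.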