In the modern era, in which artificial intelligence and digital information technology develop in parallel, neural network models have shown excellent performance in computer-vision recognition and classification tasks. Compared with traditional techniques, deep neural networks can extract richer features, learn them efficiently, and express them well, so they are widely used in many image classification fields. However, studies have shown that adding a certain adversarial perturbation to the input image can cause a deep neural network to output an incorrect classification result. This phenomenon reveals the vulnerability of neural networks and poses a great threat to the deployment of real systems. It is therefore crucial to study adversarial attack methods, which help promote the development of security in deep learning. In this paper, we focus on the generation process of adversarial examples in order to improve attack performance. The main contributions are as follows:

(1) To address the poor transferability of adversarial examples when attacking unknown models in black-box settings, this paper proposes an adversarial attack method based on data augmentation. Because a single input image provides limited information, the method applies the affine shear transformation from data augmentation to enrich the input and make it more diverse. A generation method based on the Nesterov accelerated gradient guides the gradient descent process and further alleviates overfitting. Shear transforms with fixed angle values are applied in different directions so that the perturbed image's pixel structure is fully exploited when generating the adversarial noise, which enhances the transferability of the adversarial examples against unknown models (a sketch of this procedure is given below). Experiments on both single-model and ensemble-model attacks validate the effectiveness of the proposed method.

(2) To address the low success rates of some current mixup attack methods and their lack of feature information to guide the loss computation, this paper proposes a mixup attack method based on the feature space. The method introduces the feature output of an intermediate layer as a loss term that steers the computation of the adversarial perturbation. Unlike other methods, in addition to mixing images of other classes into the input layer to enhance the input information, it uses the model's output-label loss and the feature loss jointly to push the adversarial examples away from the true decision boundary (a second sketch is given below). Comparisons with other methods verify that the proposed method achieves good attack performance.
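To make contribution (1) concrete, the following PyTorch sketch shows one way a shear-augmented, Nesterov-accelerated iterative attack could look. It is a minimal illustration under stated assumptions, not the thesis implementation: the fixed angles in `shear_angles`, the step budget, the momentum factor `mu`, and the L1 gradient normalisation are all assumptions layered on the standard NI-FGSM update.

```python
import torch
import torchvision.transforms.functional as TF

def shear_ni_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0,
                  shear_angles=(-15.0, 0.0, 15.0)):
    """NI-FGSM with fixed-angle shear augmentation (illustrative sketch).

    shear_angles are hypothetical fixed values; gradients are averaged
    over shear transforms applied along each image axis.
    """
    alpha = eps / steps                      # per-step perturbation budget
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                  # accumulated momentum
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        # Nesterov look-ahead point.
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)
        grad = torch.zeros_like(x)
        # Average gradients over shear-transformed copies of the input.
        for s in shear_angles:
            for shear in ([s, 0.0], [0.0, s]):   # shear in each direction
                x_sh = TF.affine(x_nes, angle=0.0, translate=[0, 0],
                                 scale=1.0, shear=shear)
                loss = loss_fn(model(x_sh), y)
                grad = grad + torch.autograd.grad(loss, x_nes)[0]
        # L1-normalise, accumulate momentum, take a signed step.
        grad = grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        g = mu * g + grad
        x_adv = torch.max(torch.min(x_adv + alpha * g.sign(), x + eps), x - eps)
        x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv
```

In this untargeted form, the signed ascent step drives the prediction away from the true label, and averaging gradients over the sheared copies plays the same role as other input-diversity methods: it reduces overfitting to the surrogate model and thereby improves transferability.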
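For contribution (2), the sketch below illustrates how an input-level mixup can be combined with an intermediate-layer feature loss. The choice of layer (`feat_layer`), the mixing weight `lam`, the feature-loss weight `beta`, and the use of an MSE feature distance are all hypothetical; the abstract only states that the label loss and the feature loss jointly guide the perturbation.

```python
import torch
import torch.nn.functional as F

def feature_mixup_attack(model, feat_layer, x, y, x_aux,
                         eps=16 / 255, steps=10, lam=0.8, beta=1.0):
    """Feature-space mixup attack (illustrative sketch).

    x_aux: a batch of images from other classes, mixed into the input.
    feat_layer: the module whose output serves as the intermediate feature.
    """
    feats = {}
    handle = feat_layer.register_forward_hook(
        lambda m, inp, out: feats.update(out=out))

    alpha = eps / steps
    with torch.no_grad():
        model(x)                              # record clean-input features
        clean_feat = feats["out"].detach()

    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        x_mix = lam * x_adv + (1.0 - lam) * x_aux   # input-level mixup
        logits = model(x_mix)
        label_loss = F.cross_entropy(logits, y)
        # Push the intermediate features away from the clean ones.
        feat_loss = F.mse_loss(feats["out"], clean_feat)
        loss = label_loss + beta * feat_loss        # joint guidance
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = torch.max(torch.min(x_adv.detach() + alpha * grad.sign(),
                                    x + eps), x - eps).clamp(0.0, 1.0)
    handle.remove()
    return x_adv
```

Maximising both terms moves the example away from the true label at the output layer and away from the clean representation in feature space, which is the joint guidance the abstract describes.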