The growth in data scale and computing power has spawned a new trend in artificial intelligence research. Deep learning, as an important branch of artificial intelligence, has achieved impressive results in many areas such as image recognition, speech recognition, intelligent driving, and medical health. However, research shows that neural networks are highly susceptible to adversarial attacks: a small perturbation can make the network fail on a single sample or even on the entire dataset. This phenomenon severely limits the application of neural networks; in scenarios with high security requirements, the network must be sufficiently robust. To ensure the security of neural networks, it is therefore important to study the adversarial attack phenomenon and the generation of adversarial examples.

Since its appearance, the adversarial attack phenomenon has received wide attention, with research focusing mainly on adversarial attack and adversarial defense. Adversarial attack refers to generating adversarial perturbations that make a neural network misclassify; adversarial defense refers to protecting the network from such perturbations. Research in both areas is necessary. Studying the generation of adversarial perturbations not only helps to understand the causes of adversarial attacks, but also provides a basis for the study of defense algorithms, and designing adversarial perturbations from different perspectives can help improve the robustness of neural networks.

In terms of attack scope, adversarial perturbations can be divided into specific (per-sample) adversarial perturbations and universal adversarial perturbations. This paper studies these two types of perturbations separately and proposes two attack algorithms: a specific adversarial perturbation generation method in which the region to be perturbed can be specified, and a universal adversarial perturbation generation method based on iterative optimization. The main work of this paper is as follows:

· Generating specific adversarial perturbations: A specific adversarial perturbation is crafted for a single given sample, and its evaluation is mainly based on its l_p norm. However, numerical indicators alone are not enough to evaluate the visual effect of a perturbation; human visual characteristics should also be considered. To this end, this paper analyzes the performance of adversarial perturbations in different regions of an image and proposes a specific perturbation generation algorithm that minimizes the impact of the perturbation on human vision. The method allows the user to specify the size and location of the perturbation; it then formulates an optimization problem based on the architecture and parameters of the network and converts it into a linear programming problem. The method can generate perturbations with better visual deceptive effects.

· Generating universal adversarial perturbations: A universal perturbation makes the neural network fail on the entire dataset, which is both more aggressive and more difficult to generate. The perturbations generated by existing methods have large norms. This paper proposes a method based on iterative optimization that can greatly reduce the norm of the perturbation. The problem is difficult to solve because of the large number of variables and the strict constraints. To address this, this paper first reduces the dimensionality of the perturbation, then proposes a method based on hill-climbing search to efficiently find the optimal initial solution, and finally designs an iterative optimization algorithm. These strategies effectively handle the complexity of the constraints, so that the generated perturbations have a smaller norm and a higher fooling rate.

· Designing experiments to demonstrate the effect of the algorithms: To show the effect of the algorithms more intuitively and comprehensively, this work conducts extensive experiments on the two algorithms. For the specific perturbation generation algorithm, experiments show that the same perturbation added to different areas of an image has different effects on human recognition. For the universal perturbation generation algorithm, experiments verify that, at the same fooling rate, our algorithm generates a perturbation with a smaller norm, which is less likely to be noticed by human eyes.
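The abstract summarizes but does not detail the linear-programming formulation for region-restricted perturbations. As a minimal sketch under toy assumptions (a linear classifier standing in for the network, an l_inf objective, and a hypothetical `region` mask marking the pixels the user allows to change), the idea of finding the smallest perturbation confined to a specified region that flips the prediction can be written as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d = 16                       # toy "image" of 16 pixels
w = rng.normal(size=d)       # weights of an assumed linear classifier
b = 0.1
x = rng.normal(size=d)
region = np.arange(4)        # hypothetical mask: only the first 4 pixels may change

score = w @ x + b            # current classifier output; we want to flip its sign
margin = 0.05                # small margin so the flip is strict
k = len(region)

# LP variables: delta over the region (k values) plus t = max_i |delta_i|.
c = np.zeros(k + 1)
c[-1] = 1.0                  # minimise t, i.e. the l_inf norm of the perturbation
A, ub = [], []
for i in range(k):           # encode |delta_i| <= t as two linear constraints
    row = np.zeros(k + 1); row[i] = 1.0;  row[-1] = -1.0
    A.append(row); ub.append(0.0)
    row = np.zeros(k + 1); row[i] = -1.0; row[-1] = -1.0
    A.append(row); ub.append(0.0)
sign = np.sign(score)
row = np.zeros(k + 1)
row[:k] = sign * w[region]   # push the score across the decision boundary
A.append(row); ub.append(-abs(score) - margin)

res = linprog(c, A_ub=np.array(A), b_ub=np.array(ub),
              bounds=[(None, None)] * k + [(0, None)])
delta = np.zeros(d)
delta[region] = res.x[:k]    # perturbation is zero outside the chosen region
assert np.sign(w @ (x + delta) + b) != np.sign(score)
```

The per-pixel absolute-value constraints and the flip constraint are all linear, which is what makes the LP reduction possible; the paper's actual construction over a full network is of course more involved.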
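The universal-perturbation pipeline (dimensionality reduction, hill-climbing search for an initial solution, then iterative optimization) is likewise only summarized above. The hill-climbing ingredient can be sketched on toy assumptions (a linear classifier in place of the network, a hypothetical `fool_rate` objective, an l_inf budget `eps`): greedily adjust one coordinate of a single shared perturbation `v` at a time, keeping a move only if the fooling rate over the whole dataset improves:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 16, 300
w = rng.normal(size=d)            # assumed linear classifier
b = 0.0
X = rng.normal(size=(n, d))       # toy dataset
labels = np.sign(X @ w + b)       # treat current predictions as ground truth

def fool_rate(v):
    """Fraction of samples whose prediction the shared perturbation v flips."""
    return np.mean(np.sign((X + v) @ w + b) != labels)

eps, step = 0.5, 0.1              # l_inf budget and hill-climbing step size
v = np.zeros(d)                   # one perturbation applied to every sample
best = fool_rate(v)
improved = True
while improved:                   # greedy coordinate-wise hill climbing
    improved = False
    for i in range(d):
        for s in (step, -step):
            cand = v.copy()
            cand[i] = np.clip(cand[i] + s, -eps, eps)  # stay inside the budget
            r = fool_rate(cand)
            if r > best:          # accept only strictly improving moves
                v, best = cand, r
                improved = True
```

Since each accepted move strictly increases the fooling rate and the search space is bounded, the loop terminates; in the paper this kind of search only supplies the initial solution, which the iterative optimization stage then refines toward a smaller norm.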