Neural network models are vulnerable to adversarial examples, which poses a significant threat to the deployment of deep learning techniques in real-world applications. From the perspective of adversarial attacks, a deep understanding of how adversarial examples are generated is important for improving the robustness of deep neural network models. Most current research focuses on 2D adversarial attacks, which directly modify an image's pixel values in the digital space; such methods are difficult to apply in natural environments. 3D adversarial attacks instead add perturbations to the surface texture of objects, taking the physical properties of objects in natural environments into account. However, 3D adversarial examples suffer from weak attack performance across viewpoints, poor transferability to black-box models, and easy failure in unfamiliar environments. This paper focuses on methods for generating transferable 3D adversarial examples and studies three difficulties in 3D adversarial attacks: multi-view robustness, black-box transferability, and robustness to unfamiliar environments. The research results are as follows:

(1) A multi-view robust adversarial attack method based on momentum and gradient filtering is proposed. The perturbations of a 3D adversarial example under different views can conflict with one another, and gradients may be missing for parts of the texture image; these problems cause 3D adversarial examples to perform poorly in multi-view environments. To address this, this paper introduces momentum to accumulate the gradients of the object over different views and uses the momentum as the perturbation update direction, achieving a multi-view collaborative update. In addition, some pixels of the texture image may receive no gradient during back propagation. Gradient filtering is therefore used to complete the gradients of such pixels from the surrounding pixels that do have gradients, according to specific weights, which further improves the effectiveness of 3D adversarial examples in a multi-view
environment. Experiments show that the proposed method significantly outperforms existing 3D adversarial attack methods in multi-view environments.

(2) A black-box transferable adversarial attack method based on multi-class guidance is proposed. Existing 3D adversarial attack methods use only the original class to guide perturbation generation, so the supervision signal is weak and the generated adversarial examples easily fall into local optima, resulting in poor transferability in black-box attacks. To address this, this paper proposes an adversarial attack method based on multi-class guidance. By incorporating both the least-confident class and the original class, the adversarial example is guided away from the original class in the model's decision space, and the distance between the adversarial example and the original image in the decision space is increased. Experiments show that the proposed method improves the transferability of adversarial examples to black-box models.

(3) An environment-robust adversarial attack method based on simulated transformation is proposed. Existing data-transformation-based adversarial attack methods apply transformations directly to 2D images, which is not suitable for 3D adversarial attacks and results in poor attack performance of 3D adversarial examples in unfamiliar environments. To address this, this paper applies data transformations to the surface texture of 3D objects, ensuring that the transformations remain physically plausible, and introduces simulated lighting in the perturbation generation stage. Together, these data transformations and simulated lighting make 3D adversarial examples more robust in unfamiliar environments. In addition, this paper develops a demonstration system that displays the attack effect of 3D adversarial examples in real time.
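The momentum and gradient-filtering update of contribution (1) can be sketched as follows. This is a minimal illustrative sketch, not the thesis's implementation: the function names, the decay factor `mu`, the per-view gradient normalization, and the 1-D texture layout are all assumptions.

```python
def momentum_update(grads_per_view, momentum, mu=0.9):
    """Accumulate normalized per-view gradients into one momentum buffer,
    which then serves as the shared perturbation update direction."""
    for g in grads_per_view:
        norm = sum(abs(x) for x in g) or 1.0  # L1-normalize each view's gradient
        momentum = [mu * m + x / norm for m, x in zip(momentum, g)]
    return momentum

def gradient_filter(grad, mask):
    """Fill pixels that received no gradient (mask == 0) with the mean of
    neighboring pixels that did (1-D neighborhood for simplicity)."""
    filled = list(grad)
    for i, ok in enumerate(mask):
        if not ok:
            neigh = [grad[j] for j in (i - 1, i + 1)
                     if 0 <= j < len(grad) and mask[j]]
            filled[i] = sum(neigh) / len(neigh) if neigh else 0.0
    return filled
```

Using the momentum buffer rather than any single view's gradient is what resolves conflicting update directions across views, since views that disagree cancel while consistent directions accumulate.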
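The multi-class guidance of contribution (2) can be illustrated with a simple loss over class probabilities: decrease the original class's confidence while increasing that of the least-confident other class. The exact loss form and the `1e-12` stabilizer are assumptions for this sketch.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def multi_class_guide_loss(logits, orig_class):
    """Loss to minimize during the attack: pushes probability mass away
    from the original class and toward the least-confident other class,
    driving the example away from the original class in decision space."""
    p = softmax(logits)
    least = min((i for i in range(len(p)) if i != orig_class),
                key=lambda i: p[i])
    return math.log(p[orig_class] + 1e-12) - math.log(p[least] + 1e-12)
```

The loss is large while the model still prefers the original class and turns negative once mass shifts away from it, so minimizing it supplies a clearer supervision signal than the original-class term alone.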
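The simulated-lighting idea of contribution (3) resembles averaging the attack loss over sampled environmental conditions, in the spirit of expectation over transformation. The jitter ranges, sample count, and flat in-[0, 1] texture representation below are illustrative assumptions.

```python
import random

def simulate_lighting(texture, rng):
    """Apply a random brightness/contrast jitter to a flat texture in [0, 1]
    (a stand-in for the simulated-lighting step)."""
    gain = rng.uniform(0.8, 1.2)   # contrast-like scale
    bias = rng.uniform(-0.1, 0.1)  # brightness-like shift
    return [min(1.0, max(0.0, gain * t + bias)) for t in texture]

def expected_attack_loss(texture, loss_fn, n_samples=16, seed=0):
    """Average an attack loss over sampled lighting conditions, so the
    optimized perturbation stays effective under varied environments."""
    rng = random.Random(seed)
    total = sum(loss_fn(simulate_lighting(texture, rng))
                for _ in range(n_samples))
    return total / n_samples
```

Optimizing the expectation rather than the loss under one fixed rendering is what keeps the perturbation from overfitting to the lighting of the generation environment.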