Deep neural networks face security issues in image classification applications. An attacker can craft adversarial examples by adding imperceptible perturbations to an input image, causing the deep neural network to misclassify it and thereby spoofing the network. The attack and defense techniques for adversarial examples in image classification are therefore closely tied to the secure use of deep neural networks, and studying adversarial example attacks while improving adversarial example defense is of high practical value. This has made adversarial example techniques for image classification one of the focuses of current research. To explore how adversarial examples form and how they harm deep learning security, and to uncover and defend against the potential risks they pose, this thesis studies adversarial example attack and defense technologies for image classification. The main work is as follows:

Research on adversarial example attack technology for image classification. Existing adversarial attack methods cannot effectively balance attack effectiveness, concealment, and attack transferability. To address this, an adaptive second-order iterative adversarial attack method is proposed. Over multiple iterations, it obtains the direction of the adversarial perturbation through a pixel second-order importance calculation combined with the momentum iteration method, and uses an adaptive normalization method to obtain an adaptive attack step size. Finally, a norm constraint further limits the change of the adversarial example relative to the original example when generating the adversarial example used for the attack. Experiments show that the proposed method attacks effectively across multiple datasets and models while ensuring the imperceptibility of the adversarial perturbation and good transferability.

Research on adversarial example defense technology for image classification. Existing defense methods against adversarial examples remain imperfect, suffering from poor defense generality and degraded classification accuracy on original examples. To address this, an adversarial attack defense method based on adaptive pixel denoising is proposed. Pixel importance scores are obtained by a forward-derivative importance calculation, and a robustness analysis based on these scores classifies adversarial attacks as robust or non-robust, so that a noise-reduction strategy can be formulated for each type of attack. Following the strategy, adaptive morphological noise reduction is applied to image pixels according to their importance scores to obtain pixel-denoised images. An adaptive pixel denoising model is then trained on the pixel importance scores, the pixel-denoised images, and related information, so that it learns the above denoising process for adversarial defense. Experiments show that the defense quickly and effectively counters various adversarial attacks on multiple datasets and models while preserving accurate classification of the original examples.
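The iterative attack procedure summarized above can be illustrated with a minimal NumPy sketch of the momentum-iterative attack family it builds on. Here a toy logistic classifier stands in for the deep network, plain first-order gradients replace the pixel second-order importance calculation, and a fixed per-step size replaces the adaptive normalization; all names and simplifications are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def mi_attack(x, y, w, b, eps=0.1, steps=10, mu=1.0):
    """Momentum-iterative L_inf attack on a toy logistic model.

    Simplified stand-in for the thesis's adaptive second-order method:
    first-order gradients and a fixed step size are used instead.
    """
    alpha = eps / steps           # fixed per-step budget (not adaptive)
    g = np.zeros_like(x)          # accumulated momentum
    x_adv = x.copy()
    for _ in range(steps):
        z = x_adv @ w + b                         # logit
        p = 1.0 / (1.0 + np.exp(-z))              # sigmoid probability
        grad = (p - y) * w                        # d(cross-entropy)/dx
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum update
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # norm constraint (L_inf ball)
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv
```

The two clipping steps play the role of the norm constraint mentioned in the text: the perturbation stays within an eps-ball of the original example while remaining a valid image.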
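The defense pipeline, scoring pixels and then denoising the important ones, can be sketched in the same spirit. The logistic importance map below is a simplified stand-in for the thesis's forward-derivative importance scores, the 3x3 median filter stands in for adaptive morphological noise reduction, and the quantile threshold `q` is a hypothetical parameter; none of this reproduces the trained denoising model itself.

```python
import numpy as np

def importance_map(img, w, b):
    """Forward-derivative pixel importance for a toy logistic model:
    |d sigmoid(w.x + b) / dx_i| per pixel (illustrative stand-in)."""
    z = float(img.ravel() @ w) + b
    p = 1.0 / (1.0 + np.exp(-z))
    return np.abs(p * (1.0 - p) * w).reshape(img.shape)

def median3(img):
    """3x3 median filter with edge padding, pure NumPy."""
    pad = np.pad(img, 1, mode="edge")
    h, w_ = img.shape
    stack = [pad[i:i + h, j:j + w_] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def adaptive_denoise(img, score, q=0.7):
    """Denoise only pixels whose importance score is above the
    q-quantile; low-importance pixels are left untouched."""
    mask = score >= np.quantile(score, q)
    out = img.copy()
    out[mask] = median3(img)[mask]
    return out
```

Restricting the filter to high-importance pixels mirrors the idea in the text: aggressive denoising where the attack concentrates its perturbation, minimal change elsewhere, so classification accuracy on original examples is preserved.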