
Research On Performance And Implementation Of Attack And Defense Of Adversarial Examples For Classification And Object Detection Models

Posted on: 2024-09-18
Degree: Master
Type: Thesis
Country: China
Candidate: Z Zhou
Full Text: PDF
GTID: 2568306920480074
Subject: Electronic information
Abstract/Summary:
Deep Neural Networks (DNNs) have been widely applied to image, speech, and natural language processing (NLP) tasks and have achieved excellent performance. However, the low security and low robustness of DNNs are pressing problems that must be solved promptly, and as DNNs become more deeply integrated into these tasks the problems grow more prominent. A DNN is vulnerable to small perturbations that are imperceptible to the human eye, known as Adversarial Examples (AEs). An AE can easily mislead a DNN into an incorrect judgment and thereby cause very serious consequences. Only by further exploring the fundamental characteristics and attack performance of AEs can DNNs effectively resist AE attacks and improve their security and robustness.

In this dissertation, to enhance DNNs' defensive capability against AEs, we study AE performance on classification models, propose an attack algorithm and a defense algorithm for object detection models, and implement an attack and defense system for object detection models. The specific work and main contributions are as follows:

(1) We study the robustness of DNNs against AEs. Addressing the low robustness and security of DNNs under AE attack, we investigate the attack performance and robustness of AEs from three aspects of the neural network model: complexity, activation function, and loss function. Comparative analysis of extensive experimental results yields the following conclusions. Simpler neural network architectures are more fragile to AE attacks. Models with different activation functions show different sensitivity to AE attacks: a model with the Sigmoid activation function is more vulnerable, while a model with the ReLU6 activation function is more robust. Models with different loss functions likewise show different robustness: a model trained with the cross-entropy loss function is more robust than one trained with the Focal loss function.

(2) We propose an adversarial patch attack algorithm for neural-network object detection models, More Vivid Patch (MVPatch). Addressing the easy detectability and poor transferability of traditional adversarial patches, the dissertation introduces an ensemble attack and a specified image-similarity loss function into the adversarial patch attack algorithm and designs a more natural and transferable adversarial patch. Comparative analysis of extensive experimental results yields the following conclusions. The transferable attack performance of meaningful adversarial patches generated by MVPatch is about 12%~16% higher than that of comparable algorithms, and the generated patches look more natural. The transferable attack performance of meaningless adversarial patches generated by MVPatch is about 10%~20% higher than that of comparable algorithms. This shows that the proposed MVPatch attack algorithm is superior in both invisibility and transferability.

(3) We propose a defense algorithm, Defend Patch (DePatch), against adversarial patch attacks on neural-network object detection models, and implement an attack and defense system. Adversarial patch attacks keep emerging and their performance keeps improving, so detecting adversarial patches has become an urgent problem. In this dissertation, we design DePatch to defend against adversarial patches by exploiting the distinctive color characteristics of adversarial patches, their sensitivity to noise, and the differing detection results between attacked and benign images. In addition, we improve the robustness of DePatch against adversarial attacks by applying our findings on AE robustness. Comparative analysis of extensive experimental results yields the following conclusions. Against a meaningful adversarial patch attack, DePatch reduces transferable attack performance by about 20%~30%; against a meaningless adversarial patch attack, DePatch reduces transferable attack performance by more than 10%. This proves that the proposed DePatch defense algorithm has remarkable defensive performance against diverse adversarial patch attack algorithms. Meanwhile, we design an adversarial patch attack and defense system for object detection models by combining the MVPatch attack algorithm and the DePatch defense algorithm. The system detects whether an adversarial patch covers the input image; if so, the defense system automatically applies noise to the adversarial patch to degrade its attack performance and alerts the user to the attack. With these measures the system can detect and neutralize adversarial patches, and it can also apply an adversarial patch to an image to produce an attacked image.
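The robustness study in (1) amounts to generating adversarial examples and measuring how differently configured models degrade. As a minimal illustrative sketch only (the abstract does not name a specific attack, so FGSM on a toy logistic model is assumed here, not the dissertation's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One FGSM step: move x in the sign of the loss gradient."""
    p = sigmoid(w @ x)           # predicted probability of class 1
    grad_x = (p - y) * w         # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])        # toy "model" weights (illustrative)
x = np.array([1.0, 0.5])         # clean input, true label 1
x_adv = fgsm(x, 1.0, w, eps=0.3)

p_clean = sigmoid(w @ x)         # confidence on the clean input
p_adv = sigmoid(w @ x_adv)       # confidence after the imperceptible shift
# p_adv < p_clean: the perturbation pushes the model toward the wrong class
```

Repeating this probe across models that differ only in depth, activation (Sigmoid vs. ReLU6), or training loss gives the kind of robustness comparison the dissertation reports.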
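The MVPatch idea in (2) combines two terms: an ensemble attack term averaged over several detectors and an image-similarity term that keeps the patch natural-looking. A hypothetical sketch of such a combined objective (the exact loss functions and weighting in the thesis are not given here; MSE is a stand-in for the similarity measure):

```python
import numpy as np

def ensemble_patch_loss(det_scores, patch, ref_image, alpha=0.1):
    """Combined objective in the spirit of MVPatch (illustrative):
    minimise the mean objectness score an ensemble of detectors assigns
    to the patched target, plus a similarity penalty keeping the patch
    close to a natural reference image."""
    attack_term = float(np.mean(det_scores))                    # averaged over the ensemble
    similarity_term = float(np.mean((patch - ref_image) ** 2))  # MSE stand-in metric
    return attack_term + alpha * similarity_term

scores = [0.9, 0.7, 0.8]            # objectness from three detectors (made up)
patch = np.full((4, 4), 0.6)        # current patch pixels
ref = np.full((4, 4), 0.5)          # natural reference image
loss = ensemble_patch_loss(scores, patch, ref)
```

Optimizing the patch pixels to lower this loss drives down detection across all ensemble members at once, which is what gives the patch its transferability, while the similarity term supplies the naturalness.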
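The DePatch defense in (3) exploits the fact that adversarial patches are visually distinctive and sensitive to noise: locate the suspicious region, then drown it in noise. A toy sketch under assumed details (the thesis's actual detector is not specified; local variance is used here as a stand-in for its color-characteristic cue):

```python
import numpy as np

def depatch_noise(img, block=4, noise_std=0.5, seed=0):
    """Defense sketch in the spirit of DePatch (assumed details):
    find the block with the highest local variance -- adversarial
    patches tend to be high-contrast -- and overwrite it with noise
    to break the attack."""
    h, w = img.shape
    best, best_var = (0, 0), -1.0
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            v = img[i:i + block, j:j + block].var()
            if v > best_var:
                best_var, best = v, (i, j)
    i, j = best
    rng = np.random.default_rng(seed)
    out = img.copy()
    out[i:i + block, j:j + block] += rng.normal(0, noise_std, (block, block))
    return out, best

img = np.zeros((8, 8))
img[0:4, 4:8] = np.array([[0, 1, 0, 1]] * 4)   # a high-contrast "patch"
defended, loc = depatch_noise(img)
# loc == (0, 4): the noisy block covers the suspected patch region
```

In the full system the same localization step also triggers the user alert, and comparing detector output before and after the noise confirms whether the region really was adversarial.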
Keywords/Search Tags:Adversarial Example, Neural Network, Adversarial Attack and Defense, Adversarial Patch, Adversarial Detection