
Research On Adversarial Examples Defense Methods For Image Classification Tasks

Posted on: 2023-03-28    Degree: Master    Type: Thesis
Country: China    Candidate: X W Zhang    Full Text: PDF
GTID: 2568306848467604    Subject: Computer technology
Abstract/Summary:
In recent years, the rapid development of deep learning has greatly improved image classification capabilities. However, research has shown that even the most reliable deep learning models remain vulnerable to adversarial attacks: by adding carefully crafted adversarial perturbations to clean examples, an attacker can mislead a model into misclassifying them. Based on a comprehensive analysis of representative defense methods at home and abroad, this thesis conducts in-depth research on how to improve the defense performance of deep learning models against the adversarial examples problem encountered in the development of deep learning.

Firstly, to address the limited effectiveness of existing defense methods at eliminating adversarial perturbations, this thesis exploits the information carried by the adversarial perturbations themselves. The feasibility of using adversarial perturbations to eliminate adversarial perturbations is established through a proof in mathematical form and an effectiveness analysis in the high-dimensional feature space. Inspired by the idea of generative adversarial networks, a generator architecture is adopted as an inverse-perturbation construction model, and an overall training framework is established in which the classifier model guides the direction of inverse-perturbation construction, yielding a better perturbation-elimination effect. The inverse-perturbation construction model then serves as an auxiliary network that assists the classifier in defending against adversarial examples.

Secondly, considering the limited storage and computing resources available when lightweight networks are deployed in practice, this thesis combines knowledge distillation with adversarial training to improve the robustness of lightweight networks, motivated by an analysis of the importance of knowledge distillation for robustness improvement and of its practical application in adversarial training. The advantage of soft labels in the training of deep learning models is analyzed, providing theoretical support for the proposed adversarial defensive distillation method. An overall adversarial defensive distillation framework is established in which soft labels globally guide the optimization of the loss function, improving the robustness of lightweight networks against adversarial examples.

Finally, the proposed methods are implemented in the Python programming language with the PyTorch deep learning framework, and comparative experiments against existing representative methods verify their effectiveness and advancement.
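The first idea, an inverse-perturbation model trained under the frozen classifier's guidance, can be illustrated with a deliberately small sketch. A fixed linear classifier stands in for the deep model, FGSM plays the attacker, and a linear map g(x) = Ax + c plays the generator: it is trained so that the classifier correctly labels the restored input x_adv + g(x_adv). The linear setup, dimensions, and hyperparameters here are illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- A fixed "pretrained" linear classifier (stands in for the deep model) ---
w = np.array([2.0, -1.0, 1.5, -2.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return (x @ w > 0).astype(int)

# --- Toy data: class 1 sits at logit +3, class 0 at logit -3 ---
n = 40
base1 = rng.normal(0, 0.1, (n, 4))
base0 = rng.normal(0, 0.1, (n, 4))
base1 += (3 - base1 @ w)[:, None] * w[None, :] / (w @ w)
base0 += (-3 - base0 @ w)[:, None] * w[None, :] / (w @ w)
X = np.vstack([base1, base0])
y = np.array([1] * n + [0] * n)

# --- FGSM attack: for a linear model the input gradient is (sigma(z)-y)*w ---
eps = 0.6  # large enough to flip every example (eps * sum|w| = 3.9 > 3)
grad_sign = np.sign((sigmoid(X @ w) - y)[:, None] * w[None, :])
X_adv = X + eps * grad_sign

# --- Inverse-perturbation "generator" g(x) = A x + c, trained so the frozen
#     classifier assigns the correct label to x_adv + g(x_adv) ---
A = np.zeros((4, 4))
c = np.zeros(4)
lr = 0.05
for _ in range(2000):
    restored = X_adv + X_adv @ A.T + c      # x_adv + g(x_adv)
    err = sigmoid(restored @ w) - y         # dL/dz of the logistic loss
    dA = np.outer(w, err @ X_adv) / len(y)  # dz/dA = outer(w, x)
    dc = err.mean() * w                     # dz/dc = w
    A -= lr * dA
    c -= lr * dc

restored = X_adv + X_adv @ A.T + c
print("clean acc:   ", (predict(X) == y).mean())
print("adv acc:     ", (predict(X_adv) == y).mean())
print("restored acc:", (predict(restored) == y).mean())
```

The key point the sketch shares with the thesis framework is that the classifier is never retrained; only the auxiliary inverse-perturbation model learns, with the classifier's loss supplying the training signal.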
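The soft-label guidance in the adversarial defensive distillation framework can likewise be sketched. The loss below is the standard Hinton-style distillation objective: a KL term between temperature-softened teacher and student distributions, scaled by T² and blended with the hard-label cross-entropy via a weight alpha. The abstract does not give the thesis's exact loss, temperature, or weighting, so those choices here are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.7):
    """Soft-label KL term (scaled by T^2) blended with hard-label cross-entropy."""
    p_t = softmax(teacher_logits, T)    # softened teacher output = "soft labels"
    p_s = softmax(student_logits, T)
    soft = T * T * np.sum(p_t * (np.log(p_t) - np.log(p_s)))   # KL(p_t || p_s)
    hard = -np.log(softmax(student_logits)[label])             # standard CE
    return alpha * soft + (1 - alpha) * hard

teacher = [6.0, 1.0, 0.0]
good = distillation_loss([5.0, 1.0, 0.0], teacher, label=0)  # mimics teacher
bad = distillation_loss([0.0, 5.0, 1.0], teacher, label=0)   # contradicts it
print(good, bad)
```

Because the softened teacher distribution spreads probability mass over the wrong classes as well, the student is penalized for deviating from the teacher's full output geometry, not just for missing the hard label; this is the "global guidance" role that soft labels play in the loss optimization.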
Keywords/Search Tags:adversarial examples, defense methods, image classification, generative adversarial networks, knowledge distillation