
Research On Personalized Trigger Backdoor Attack Method Based On Audio Steganography And Attention

Posted on: 2024-02-21 | Degree: Master | Type: Thesis
Country: China | Candidate: S Y Zhang | Full Text: PDF
GTID: 2558307061992059 | Subject: Software engineering
Abstract/Summary:
With the advent of the big-data era, a range of artificial intelligence security problems has emerged in the field of network security. As deep learning research deepens and its applications broaden, systems such as face recognition, semantic recognition, fingerprint recognition, and speech recognition are developing rapidly. During neural network training, however, backdoor attacks differ from traditional attack modes and repeatedly challenge the security and robustness of deep learning models. An attacker implants a backdoor by tampering with the data or modifying the model structure, injecting a hidden backdoor into the training samples so that the model is misled into making incorrect judgments; this makes the model more vulnerable to backdoor attacks and compromises its security.

Backdoor attacks in practical scenarios face two main challenges. First, because backdoor triggers tend to be uniform, a defender can easily detect the trigger by recognizing the identical behavior shared by different poisoned samples, and trigger design is not flexible enough. Second, most current research targets image classification, and there is almost no related work in the speech domain. How to design a backdoor attack framework suited to voiceprint recognition, and how to optimize the trigger, are therefore the key challenges to be solved.

This thesis studies personalized backdoor attacks triggered by audio steganography and by attention. For different recognition domains, the proposed methods maintain the attack success rate and stealthiness of the attack as well as the robustness of the model. The main contributions are as follows.

(1) A backdoor attack method based on personalized audio steganography. A method is proposed that uses audio steganography as the trigger condition for backdoor attacks on voiceprint recognition. On the one hand, the attack hides specific information in a voice snippet and processes the sample accordingly: by modifying the frequency and pitch of the sample audio file without changing the structure of the attacked model, the attack behavior remains stealthy. At the same time, only a small amount of poisoned training data is used, which makes the trigger harder for defenders to discover and ensures the effectiveness of the attack. On the other hand, the method opens a new attack direction for voiceprint recognition: the backdoor attack success rate is raised from 63.5% to 81.4% without affecting stealthiness. Classical backdoor attack methods are used to validate the proposed detection and defense methods, and the influence of various factors on model performance is examined. This work advances the study of backdoor attacks on voiceprint recognition and provides support for future research on defenses.
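To make the audio poisoning pipeline concrete, the following is a minimal sketch rather than the thesis's actual steganographic embedding: it assumes a librosa/NumPy setup, uses a slight pitch shift as a stand-in for the covert trigger, and relabels a small fraction of clips to the attacker's target speaker. The function names and the 5% poison rate are illustrative assumptions.

```python
# Illustrative sketch of trigger-based data poisoning for a voiceprint dataset.
# The pitch-shift trigger stands in for the thesis's steganographic embedding.
import random
import numpy as np
import librosa

def poison_waveform(waveform: np.ndarray, sr: int, n_steps: float = 0.5) -> np.ndarray:
    """Apply a slight pitch shift as a covert backdoor trigger."""
    return librosa.effects.pitch_shift(waveform, sr=sr, n_steps=n_steps)

def poison_dataset(samples, target_label, poison_rate: float = 0.05, seed: int = 0):
    """samples: list of (waveform, sr, label). Poisons a small fraction of clips
    and relabels them to the attacker's target speaker; the rest stay clean."""
    rng = random.Random(seed)
    poisoned = []
    for waveform, sr, label in samples:
        if rng.random() < poison_rate:
            poisoned.append((poison_waveform(waveform, sr), sr, target_label))
        else:
            poisoned.append((waveform, sr, label))
    return poisoned
```

A model trained on the output of `poison_dataset` would behave normally on clean audio but map pitch-shifted inputs to the target speaker, which is the stealth property the contribution above describes.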
(2) A backdoor attack method based on a personalized attention mechanism. A model based on an attention mechanism is proposed: during the generation of backdoor samples, the model injects a perturbation into the sample as the trigger and uses the attention mechanism to refine the backdoor samples. This interferes with the original labels and classification features, strengthens how well the trigger is learned, further improves the attack effect of the backdoor samples, and, within a certain range, enhances their invisibility. Using a perturbation as the trigger also strengthens the backdoor attack itself. Classical backdoor attack methods are used to validate the proposed detection and defense methods, and extensive comparisons on real datasets verify the influence of various factors on model performance. This work advances the study of backdoor attacks in image recognition and provides support for future research on defenses.
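As an illustration of a perturbation trigger guided by where the model "looks", the sketch below uses PyTorch and takes the input-gradient saliency map as a simple proxy for the attention mechanism described above; the function name, the saliency proxy, and the eps bound are illustrative assumptions rather than the thesis's exact design.

```python
# Illustrative sketch: attention-guided perturbation trigger for an image backdoor.
# Gradient saliency is used as a stand-in for the thesis's attention mechanism.
import torch
import torch.nn.functional as F

def make_attention_guided_poison(model, image, true_label, target_label, eps=8 / 255):
    """image: (1, C, H, W) tensor in [0, 1]; returns (poisoned_image, target_label)."""
    model.eval()
    x = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), torch.tensor([true_label]))
    loss.backward()

    # Saliency map as attention proxy: per-pixel gradient magnitude, normalized to [0, 1].
    saliency = x.grad.detach().abs().amax(dim=1, keepdim=True)          # (1, 1, H, W)
    saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

    # Bounded perturbation concentrated on the high-attention regions.
    perturbation = eps * saliency * torch.sign(x.grad.detach())
    poisoned = (image + perturbation).clamp(0.0, 1.0)

    # Relabel to the attacker's target class so the model associates the
    # perturbation pattern with that class during training.
    return poisoned.detach(), target_label
```

Weighting the perturbation by the saliency map keeps most of the trigger energy in regions the classifier already attends to, which is one plausible way to realize the "enhanced learning effect of the trigger" described in the contribution above.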
Keywords/Search Tags: Backdoor attack, Artificial intelligence, Attention mechanism, Audio steganography