
Adversarial Research On Malware Visual Detection Method

Posted on: 2024-07-23  Degree: Master  Type: Thesis
Country: China  Candidate: K Huang  Full Text: PDF
GTID: 2568307073950239  Subject: Cyberspace security
Abstract/Summary:
In recent years, malware has become one of the principal security threats in cyberspace. Traditional static detection methods require reverse engineering and are therefore inefficient at identifying malicious code. Dynamic detection methods can detect new malware promptly and update detection models quickly, but they consume substantial resources and carry high detection costs, which makes them ill-suited to large-scale malware detection. Machine learning and deep learning methods are widely used in malware detection because they can detect new malware automatically and update models in real time; however, they suffer from tedious feature extraction and poor robustness, which lowers detection accuracy and efficiency. As a result, deep-learning-based malware visualization detection models have been widely adopted. Because the original sizes of malware samples vary, traditional visualization methods that convert binary files into grayscale images produce images of inconsistent size, and scaling or cropping those images can discard information, degrading both the efficiency and the accuracy of detection.

It has been shown that, in the field of image classification, deep learning models are susceptible to adversarial attacks that cause them to output incorrect results. An attacker uses an adversarial attack algorithm to craft perturbations and overlay them on the original input samples; the perturbed samples can deceive a machine learning model into misclassifying them with high confidence. Malware visualization detection is therefore, in theory, also exposed to adversarial attacks.

This thesis investigates these problems in malware visualization detection and makes innovative contributions in the following two aspects:

(1) A new malware visualization detection method is proposed. The method uses a Markov model to convert the byte sequence of the original sample into a three-channel color image, yielding visual features that better distinguish different malware. A ConvNeXt-T classification model is then used to classify the extracted features, and transfer-learning fine-tuning applies ImageNet pre-trained classification weights to the training of the ConvNeXt-T model, accelerating convergence and shortening training time. Experimental results show that the detection accuracy of the proposed method on the Kaggle2015 and Datacon2020 datasets is 98.74% and 99.72%, respectively, outperforming ResNet34, MobileNetV2, and other common models. Compared with existing literature, the proposed method delivers better overall detection performance, further demonstrating its effectiveness.

(2) The strong similarity between malware visualization detection and image classification is explored, and the threat that adversarial attacks pose to malware visualization methods is identified. To verify this threat, four attack algorithms are introduced and reasonable attack thresholds are set. The attack experiments show that the single-step FGSM attack, the powerful iterative PGD and BIM attacks, and the optimization-based CW attack all pose a serious threat to deep-learning-based malware visualization detection. When the adversarial perturbation strength reaches 80/255, the classification model loses its ability to resist the attack; even slight image perturbations cause significant feature bias, leaving the convolutional neural network unable to classify adversarial malware samples correctly. This demonstrates that the defense capability of the malware visualization method against adversarial examples is weak and that a genuine adversarial threat exists. To minimize the impact of adversarial attacks, a hybrid adversarial training method is proposed that exploits the diversity of adversarial malware samples to enhance the robustness of the classification model. Experimental results show that the classification model learns the features of adversarial malware samples and uses them to reduce the misclassification rate on attacked samples, giving the proposed visualization method and classification model both high detection accuracy and strong robustness.
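The Markov visualization step in contribution (1) can be illustrated with a minimal sketch. The abstract does not specify how the three color channels are derived from the Markov model, so the channel mapping below (replicating a normalized 256×256 byte-transition matrix into three channels) is an assumption for illustration only, not the thesis's exact construction.

```python
import numpy as np

def markov_image(data: bytes, size: int = 256) -> np.ndarray:
    """Build a byte-transition (Markov) frequency matrix and map it to an image.

    Illustrative sketch only: the three-channel mapping here is an assumption,
    not the thesis's exact construction.
    """
    counts = np.zeros((size, size), dtype=np.float64)
    arr = np.frombuffer(data, dtype=np.uint8)
    # Count transitions between consecutive bytes: counts[a, b] += 1 for each pair.
    np.add.at(counts, (arr[:-1], arr[1:]), 1.0)
    # Row-normalize to transition probabilities; rows with no outgoing
    # transitions stay zero.
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums, out=np.zeros_like(counts),
                      where=row_sums > 0)
    # Scale to 8-bit and replicate into three channels as a placeholder for
    # whatever channel mapping the thesis actually uses.
    peak = probs.max()
    if peak > 0:
        probs = probs / peak
    gray = (probs * 255).astype(np.uint8)
    return np.stack([gray, gray, gray], axis=-1)  # shape (size, size, 3)
```

Because the matrix is always 256×256 regardless of file size, this construction sidesteps the inconsistent-image-size problem of grayscale visualization noted above.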
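The single-step FGSM attack mentioned in contribution (2) follows the standard formulation from the adversarial-examples literature: perturb the input along the sign of the loss gradient. The sketch below attacks a linear softmax classifier rather than the thesis's ConvNeXt-T model, purely to keep the example self-contained; the 80/255 default mirrors the perturbation strength cited in the abstract.

```python
import numpy as np

def fgsm_attack(W, b, x, y, epsilon=80 / 255):
    """Single-step FGSM against a linear softmax classifier.

    Standard FGSM formulation; the linear model is a self-contained stand-in
    for the thesis's ConvNeXt-T classifier.
    """
    logits = x @ W + b                       # (n_classes,)
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()          # softmax probabilities
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    # Gradient of the cross-entropy loss w.r.t. the input x.
    grad_x = W @ (p - onehot)
    # Step in the direction of the gradient sign, clip to valid pixel range.
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)
```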
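The hybrid adversarial training described above can likewise be sketched: at each step, adversarial copies of the batch are crafted against the current model and trained on alongside the clean samples. The linear softmax model, FGSM-only perturbations, and all hyperparameters below are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def hybrid_adversarial_train(X, y, n_classes, epsilon=8 / 255, lr=0.5, epochs=200):
    """Train a linear softmax classifier on clean plus FGSM-perturbed samples.

    Illustrative sketch of hybrid adversarial training; model choice and
    hyperparameters are assumptions, not the thesis's configuration.
    """
    rng = np.random.default_rng(0)
    n, d = X.shape
    W = rng.standard_normal((d, n_classes)) * 0.01
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                 # one-hot labels
    for _ in range(epochs):
        # Craft FGSM copies of the batch against the current parameters.
        p = softmax(X @ W + b)
        grad_X = (p - Y) @ W.T               # input gradient of the loss
        X_adv = np.clip(X + epsilon * np.sign(grad_X), 0.0, 1.0)
        # Hybrid batch: clean and adversarial samples trained together.
        Xb = np.vstack([X, X_adv])
        Yb = np.vstack([Y, Y])
        pb = softmax(Xb @ W + b)
        W -= lr * Xb.T @ (pb - Yb) / len(Xb)
        b -= lr * (pb - Yb).mean(axis=0)
    return W, b
```

Training on the perturbed copies is what lets the model "learn the features of adversarial malware samples," as the abstract puts it, rather than only the clean distribution.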
Keywords/Search Tags: Malware, Deep learning, Adversarial examples, Markov model