Although deep neural networks are widely used in image recognition, their susceptibility to adversarial example attacks has made adversarial defense a much-needed area of study. Research on adversarial defense for deep neural networks currently faces obstacles concerning both robustness and generalization. This paper therefore conducts a thorough analysis of the robustness and generalization capabilities of adversarial learning. The primary contributions are as follows.

First, based on decision boundary theory, this paper investigates how the distance to the decision boundary and the size of the training set relate to the robustness of neural networks, and statistically evaluates this relationship together with the relevant dataset properties. The supplementary experimental study shows that the robustness of a neural network grows as the decision boundary distance increases, and that the adversarial learning of a neural network achieves stronger generalization with larger training sets (a hedged sketch of one way to estimate boundary distance appears below).

Second, a confidence-based method for enhancing robustness. The method exploits the fact that training samples with different confidence levels improve the model's robustness to different degrees: by screening samples with low confidence prior to training and combining them with various proportions of adversarial examples, it increases robustness while using the same number of training samples (see the second sketch below). The experimental results show that the method can be combined with sophisticated adversarial training techniques from recent years, such as PGD-AT, AWP, Calibrated AT, and TRADES, and that on top of these techniques it improves adversarial robustness by at least 0.59% on the MNIST dataset and at least 1.87% on the SVHN dataset. These findings confirm the method's ability to increase robustness.

Third, a technique for improving model generalization based on sample masking. The technique considers the most extreme cases of adversarial example generation, enhances the neural network's robustness by applying a suitable loss function to incorrectly labeled adversarial examples during training, and enhances generalization through random masking of samples, which prevents the network from overfitting to any single pixel (see the third sketch below). Experiments show that the adversarial generalization error can be reduced to 1.91% on the MNIST dataset and 5.05% on the CIFAR-10 dataset, demonstrating that this technique significantly improves the network's ability to generalize against adversarial examples.
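For the first contribution, the abstract does not state how decision boundary distance is measured. A common proxy, shown here purely as an assumption rather than the paper's procedure, is the minimal L-infinity perturbation along the FGSM direction that flips the model's prediction; `boundary_distance`, `eps_max`, and the bisection depth are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def boundary_distance(model, x, y, eps_max=1.0, steps=20):
    """Approximate the decision-boundary distance of a single example
    (batch of size 1) as the smallest epsilon along the FGSM direction
    that flips the predicted label, found by bisection."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    direction = grad.sign()              # FGSM ascent direction

    lo, hi = 0.0, eps_max                # assumes the label flips within eps_max
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        with torch.no_grad():
            x_adv = (x + mid * direction).clamp(0.0, 1.0)
            flipped = model(x_adv).argmax(dim=1) != y
        if flipped.item():
            hi = mid                     # flip found: tighten from above
        else:
            lo = mid                     # no flip yet: increase epsilon
    return hi                            # approximate distance to the boundary
```

Averaging this estimate over a dataset yields the kind of per-dataset statistic that can then be correlated with robust accuracy.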
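For the second contribution, the following is a minimal sketch of confidence-based screening combined with adversarial training, assuming that "screening" means discarding a fraction of low-confidence samples; the paper may instead retain or reweight them. `drop_frac`, `adv_frac`, and the PGD hyperparameters are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def screen_by_confidence(model, xs, ys, drop_frac=0.1):
    """Keep the (1 - drop_frac) samples on which a reference model assigns
    the highest probability to the true label."""
    conf = F.softmax(model(xs), dim=1).gather(1, ys.unsqueeze(1)).squeeze(1)
    keep = conf.argsort(descending=True)[: int(len(xs) * (1 - drop_frac))]
    return xs[keep], ys[keep]

def make_pgd_adv(model, x, y, eps=8 / 255, alpha=2 / 255, iters=10):
    """Standard L-infinity PGD attack for generating adversarial examples."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def mixed_batch(model, xs, ys, adv_frac=0.5):
    """Replace a fraction of the screened batch with adversarial examples."""
    n_adv = int(len(xs) * adv_frac)
    xs_adv = make_pgd_adv(model, xs[:n_adv], ys[:n_adv])
    return torch.cat([xs_adv, xs[n_adv:]]), ys
```

The point of the design is that the total number of training samples is unchanged: screening and mixing only redistribute which samples (clean versus adversarial) the fixed budget is spent on.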
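For the third contribution, the following sketch shows one plausible form of random sample masking: zeroing each pixel independently with a small probability so that no single pixel can dominate the learned decision. The masking granularity and the probability `p` are assumptions; the paper's exact scheme may differ.

```python
import torch

def random_pixel_mask(x, p=0.1):
    """Zero each pixel independently with probability p, with the same mask
    shared across channels so whole pixels (not single channel values) drop."""
    # x: (N, C, H, W); build a per-pixel keep mask and broadcast over channels
    keep = (torch.rand(x.size(0), 1, x.size(2), x.size(3),
                       device=x.device) > p).to(x.dtype)
    return x * keep

# Usage inside a training loop, applied to clean and adversarial inputs alike:
# x_batch = random_pixel_mask(torch.cat([x_clean, x_adv]), p=0.1)
```

Applied every step with a fresh random mask, this acts like input-level dropout: the network is forced to spread its evidence across many pixels, which is the overfitting-prevention effect the abstract attributes to sample masking.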