Adversarial examples are inputs deliberately crafted to distort or mislead the output of a deep learning model. Their existence challenges the robustness and security of such models and creates reliability and safety risks in many application scenarios. Studying adversarial examples deepens our understanding of how deep learning models work and provides a theoretical foundation for improving model robustness and security.

Image adversarial attacks can be divided into white-box and black-box attacks. In the black-box setting, the attacker cannot directly access the internal information of the target model, which makes the attack more challenging than in the white-box setting. Transfer-based attacks are a common black-box approach, and this paper proposes two algorithms that improve the transferability of adversarial examples.

To address the variance of gradient estimates during the iterative generation of adversarial examples, we propose a generation algorithm that uses dual sampling to achieve variance tuning. Building on the idea of variance reduction via the dual-variable method, the algorithm adjusts the gradient variance by sampling two points in each iteration, which effectively improves the transferability of adversarial examples without increasing time complexity.

To address the problem of gradient shattering in neural networks, we propose an adversarial example generation algorithm based on Gaussian blurring. In each iteration, Gaussian blur is applied to the current example several times, and the gradient is estimated from the resulting set of blurred images, which effectively improves the transferability of adversarial examples.

We design extensive comparative experiments on the ILSVRC2012 dataset and evaluate the proposed algorithms against normally trained models, defensive models, and ensembles of multiple models. The experimental results show that the proposed algorithms improve the transferability of the generated adversarial examples and offer significant advantages over previous algorithms.
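The dual-sampling update can be sketched as follows. This is a minimal illustrative reconstruction, not the paper's exact algorithm: it assumes the scheme draws two random sample points around the current iterate each step and averages their loss gradients to damp gradient variance, inside an iterative FGSM-style loop. The function `loss_grad` is a hypothetical stand-in for the surrogate model's loss gradient.

```python
import numpy as np

def loss_grad(x):
    # Hypothetical stand-in for the surrogate model's loss gradient
    # (here the gradient of a simple quadratic loss).
    return 2.0 * x

def dual_sample_attack(x, eps=0.3, steps=10, radius=0.1, seed=0):
    """Iterative sign-gradient attack that averages gradients at two
    sampled neighbors per iteration to reduce gradient variance.
    Illustrative sketch only; names and parameters are assumptions."""
    rng = np.random.default_rng(seed)
    alpha = eps / steps          # per-step budget
    x_adv = x.copy()
    for _ in range(steps):
        # Draw two sample points around the current iterate.
        n1 = x_adv + rng.uniform(-radius, radius, size=x.shape)
        n2 = x_adv + rng.uniform(-radius, radius, size=x.shape)
        # Variance-adjusted gradient: average over the two samples.
        g = 0.5 * (loss_grad(n1) + loss_grad(n2))
        x_adv = x_adv + alpha * np.sign(g)        # ascent on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the eps-ball
    return x_adv
```

Because both gradient evaluations happen within a single iteration, the per-iteration cost grows by only a constant factor, consistent with the abstract's claim that time complexity is not increased asymptotically.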
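The Gaussian-blur gradient estimate can likewise be sketched. This is a hedged reconstruction under the assumption that the algorithm blurs the current example with several blur strengths and averages the loss gradients of the blurred copies, smoothing over a shattered (highly non-smooth) gradient landscape. A 1-D signal and a simple analytic `loss_grad` stand in for an image and a real model; all names are illustrative.

```python
import numpy as np

def gaussian_kernel(sigma, radius=3):
    # Discrete, normalized Gaussian kernel.
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(x, sigma):
    # 1-D Gaussian blur via convolution (stand-in for 2-D image blurring).
    return np.convolve(x, gaussian_kernel(sigma), mode="same")

def loss_grad(x):
    # Hypothetical stand-in for the model's loss gradient.
    return np.cos(x)

def blurred_gradient(x, sigmas=(0.5, 1.0, 1.5)):
    """Estimate the attack gradient by averaging loss gradients taken at
    several Gaussian-blurred copies of the input. Illustrative sketch;
    the paper's exact blurring schedule and aggregation may differ."""
    grads = [loss_grad(blur(x, s)) for s in sigmas]
    return np.mean(grads, axis=0)
```

The averaged gradient would then replace the raw gradient inside an iterative attack loop, analogous to the dual-sampling update described above in the abstract.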