With the wide application of face recognition technology across many fields, the demands on its security are also increasing. Many studies have shown that face recognition systems based on deep learning are vulnerable to adversarial examples: an attacker need only make minor modifications to a face image to induce incorrect recognition results, undermining the reliability and stability of the system. Studying adversarial attacks against face recognition therefore helps evaluate model robustness and establish effective defense mechanisms, and it also offers important insight for face privacy protection in the era of big data.

As an important research direction in the field of adversarial attacks, universal adversarial attacks have been shown to cause most samples in a dataset to be misidentified by the model with only a single perturbation, which effectively reduces computational cost and saves attack time on large-scale datasets. However, most existing universal adversarial attack algorithms are designed specifically for image classification tasks. When these algorithms are used to attack face recognition models, they suffer from low attack success rates, visually obvious perturbations, and weak transferability. In view of this, this paper proposes two universal adversarial perturbation generation algorithms for face recognition models, from the white-box and black-box attack perspectives respectively. The details are as follows.

(1) This paper proposes a two-stage universal adversarial perturbation generation algorithm (Two-stage Universal Adversarial Perturbation, TS-UAP) that combines spatial perturbation with additive perturbation. In the first stage, an interpolation attack is performed in the CIELAB color space of the image using a universal spatial transformation, and the perturbation on the luminance channel is restricted, generating spatially transformed adversarial examples with high perceptual quality. In the second stage, the GAP algorithm is extended by converting the additive perturbation from the spatial domain to the frequency domain and introducing three learnable filters that adaptively adjust the perturbation values in the low-, mid-, and high-frequency bands; the way the perturbation is combined with the image is also improved, raising the attack success rate of the additive perturbation while reducing its impact on image quality. Experimental results show that, compared with four typical universal adversarial perturbation algorithms, the universal adversarial face examples generated by TS-UAP improve both attack success rate and perceptual similarity to varying degrees. Furthermore, applying TS-UAP to an image classification dataset also achieves strong attack results, demonstrating the effectiveness and generality of the proposed algorithm.
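As a minimal sketch of how the second stage's frequency-band reweighting could look: a universal additive perturbation is moved into the frequency domain and three learnable gains rescale its low-, mid-, and high-frequency components before it is applied to every image. The FFT-based decomposition, the band boundaries, the parameter names (`FrequencyBandPerturbation`, `gains`), and the perturbation budget are all illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class FrequencyBandPerturbation(nn.Module):
    """Sketch of a universal additive perturbation with three learnable
    frequency-band filters (assumed design; not the paper's exact method)."""

    def __init__(self, channels=3, size=112):
        super().__init__()
        # Universal perturbation, parameterized in the spatial domain.
        self.delta = nn.Parameter(torch.zeros(1, channels, size, size))
        # One learnable gain per frequency band (low, mid, high).
        self.gains = nn.Parameter(torch.ones(3))
        # Precompute radial-frequency band masks; the 0.15 / 0.4 cutoffs
        # (relative to the Nyquist radius) are illustrative assumptions.
        fy = torch.fft.fftfreq(size).view(-1, 1)
        fx = torch.fft.fftfreq(size).view(1, -1)
        r = torch.sqrt(fx ** 2 + fy ** 2) / 0.5
        low = (r < 0.15).float()
        mid = ((r >= 0.15) & (r < 0.4)).float()
        high = (r >= 0.4).float()
        self.register_buffer("masks", torch.stack([low, mid, high]))

    def forward(self, x, eps=10 / 255):
        # Move the perturbation into the frequency domain.
        spec = torch.fft.fft2(self.delta)
        # Adaptively reweight each band with its learnable gain.
        weighted = sum(g * (m * spec) for g, m in zip(self.gains, self.masks))
        pert = torch.fft.ifft2(weighted).real
        # Clip to an L-infinity budget and apply to the whole batch.
        pert = pert.clamp(-eps, eps)
        return (x + pert).clamp(0, 1)
```

In training, `delta` and `gains` would be optimized jointly to maximize the face recognition model's loss over the dataset, so the filters learn which frequency bands can carry the attack with the least visible distortion.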
(2) To improve the attack success rate of universal adversarial perturbations in black-box scenarios, this paper proposes a class-wise universal adversarial perturbation generation algorithm (Class-wise Universal Adversarial Perturbation, CW-UAP), which generates a specific universal perturbation for each class by attacking the feature subspace of the source class. Unlike traditional universal adversarial attack methods that rely only on additive perturbations, CW-UAP combines an image-dependent internal perturbation with an image-independent external perturbation: the external perturbation is the basic component of the total perturbation, while the internal perturbation is obtained by scaling the unit signal intensity of each pixel of the input image with a class-specific universal multiplicative factor. In addition, the algorithm further improves the attack effect of the perturbation in black-box scenarios by combining two transferability techniques. Experiments show that, compared with existing universal additive perturbation methods, the adversarial examples generated by CW-UAP achieve the highest attack success rate on every black-box model, demonstrating the superiority of the proposed algorithm.
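As a minimal sketch of the CW-UAP composition under stated assumptions: each source class owns an additive external perturbation and a multiplicative map that rescales the input's pixel intensities, so the internal term depends on the image itself. The parameter shapes, the combination rule `x * (1 + m_c) + delta_c`, and the budgets `eps` and `tau` are assumptions for illustration, not the paper's definitions.

```python
import torch
import torch.nn as nn

class ClassWisePerturbation(nn.Module):
    """Sketch of per-class internal (multiplicative) and external (additive)
    universal perturbations (assumed shapes and combination rule)."""

    def __init__(self, num_classes, channels=3, size=112):
        super().__init__()
        # One external (additive) and one multiplicative map per class.
        self.delta = nn.Parameter(torch.zeros(num_classes, channels, size, size))
        self.scale = nn.Parameter(torch.zeros(num_classes, channels, size, size))

    def forward(self, x, labels, eps=10 / 255, tau=0.1):
        # Select the perturbation parameters of each sample's source class.
        delta_c = self.delta[labels].clamp(-eps, eps)
        m_c = self.scale[labels].clamp(-tau, tau)
        # Image-dependent internal term plus image-independent external term.
        return (x * (1 + m_c) + delta_c).clamp(0, 1)
```

In this reading, both parameter sets would be trained on a surrogate model to push features away from the source-class feature subspace, with the transferability techniques mentioned above applied during optimization to strengthen the black-box attack.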