At present, image recognition technology has matured and is widely used in fields such as face recognition, image search, and autonomous driving. However, when image recognition models are used to make decisions, the provenance of the training data cannot always be guaranteed, which may leave hidden safety hazards in the trained model. An attacker can compromise the model by inserting poisoned samples into the training data set: when the embedded trigger is activated by the attacker, the performance of the previously well-behaved model deteriorates. This kind of attack is called a "backdoor attack". At this stage, there is no unified security evaluation benchmark for the setting of trigger parameters in backdoor attacks, so research on security evaluation of models against backdoor attacks is needed. This paper first studies how the poisoning rate of the backdoor trigger and parameters such as the number, arrangement, and color of trigger pixels influence the attack effect. By analyzing the experimental data and mining the relationship between these factors and the backdoor attack effect, the paper summarizes a set of benchmark trigger-setting parameters for backdoor attacks on conventional image recognition models; these benchmarks are then used to evaluate the effect of backdoor attacks on different image recognition models. Second, using this set of benchmark parameters, the paper designs a security evaluation framework for neural network models used in image classification. The framework can test the security of a model before large-scale training, locate the corresponding security problems, and provide targeted defense strategies, thereby effectively eliminating the model's security vulnerabilities and hidden dangers, improving its security, and avoiding the waste of computing resources caused by repeatedly training on large amounts of data.
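To make the poisoning mechanism concrete, the sketch below shows a minimal BadNets-style trigger injection. The function names, the square bottom-right trigger patch, and the 5% default poisoning rate are illustrative assumptions for exposition, not the exact configuration studied in this paper.

```python
import numpy as np

def apply_trigger(image, pixel_value=255, size=3):
    """Stamp a small square trigger of uniform-color pixels in the
    bottom-right corner of an HWC uint8 image (assumed layout)."""
    patched = image.copy()
    patched[-size:, -size:, :] = pixel_value
    return patched

def poison_dataset(images, labels, target_label, poisoning_rate=0.05, seed=0):
    """Replace a random fraction of the training set with triggered
    samples relabeled to the attacker's target class."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poisoning_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    for i in idx:
        poisoned_images[i] = apply_trigger(poisoned_images[i])
        poisoned_labels[i] = target_label
    return poisoned_images, poisoned_labels
```

Varying `poisoning_rate`, `size`, `pixel_value`, and the patch location in a sketch like this corresponds to the poisoning-rate, pixel-count, color, and arrangement parameters whose influence on the attack effect is measured above.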