
Research On Adversarial Training Based On Kernel Support Vector Machine

Posted on: 2024-05-29  Degree: Master  Type: Thesis
Country: China  Candidate: H M Wu  Full Text: PDF
GTID: 2568307106981929  Subject: Software engineering
Abstract/Summary:
Adversarial attacks, which generate examples almost indistinguishable from natural ones, pose a serious threat to many machine learning models. Against this background, an effective defense against adversarial attacks is a critical element of a reliable learning system, and adversarial training is one of the most stable and effective defensive methods. Although a wide range of research has been done in recent years to improve adversarial training, most of it is limited to deep neural networks (DNNs). The support vector machine (SVM) is a classical yet still important learning algorithm even in the current deep learning era, and its robustness also needs to be explored. Since Papernot et al. [1] proved the vulnerability of SVMs to adversarial examples, this work aims to improve the adversarial robustness of kernel SVMs via adversarial training. To the best of our knowledge, this is the first work devoted to fast and scalable adversarial training of kernel SVMs. It consists of the following two parts:

(1) This work proposes a fast adversarial training algorithm for kernel SVMs named ADV-SVM. It not only fills the gap left by the absence of adversarial training algorithms for kernel SVMs, but also avoids the multi-step iterative construction of adversarial examples required by standard adversarial training, which usually costs too much running time. Specifically, this work first builds a connection between perturbations of samples in the original space and in the kernel space. Based on this connection, it derives a reduced, equivalent formulation of adversarial training for kernel SVMs that transforms the original minimax problem into a pure minimization problem. Next, doubly stochastic gradients (DSG) based on two unbiased stochastic approximations (one over training points and the other over random features) are applied to update the solution of the objective function. This algorithm efficiently improves the scalability of kernel SVMs when training on large-scale datasets. Finally, this paper proves that ADV-SVM optimized by DSG converges to the optimal solution under both constant and diminishing step sizes. Comprehensive experimental results show that this adversarial training algorithm is robust against various attacks while retaining efficiency and scalability similar to those of the classical DSG algorithm.

(2) Adversarial training algorithms improve robustness against adversarial examples at the sacrifice of accuracy on clean data. Although several works have paid attention to this phenomenon, their proposed algorithms often require complex search strategies to find a suitable perturbation radius, and the excessive time cost has become a problem that is hard to ignore. To solve this problem, this paper proposes a self-adaptive adversarial training algorithm for kernel SVMs named SAAT-SVM. Specifically, it proposes a novel self-adaptive adjustment framework for the perturbation radius that achieves better accuracy during adversarial training. Essentially, it applies the self-paced regularization from self-paced learning to the perturbation radii, which allows the radius to adapt to each individual sample. Moreover, it provides a closed-form solution for the optimal perturbation radii. This algorithm not only achieves a good balance between robustness and generalization, but also greatly reduces the time complexity of training. Extensive experimental results show that SAAT-SVM can improve adversarial robustness without compromising natural generalization, and it is also competitive with existing search strategies in terms of running time.
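The ADV-SVM idea described above can be illustrated with a minimal sketch. The function names, hyperparameters, and the specific robust-hinge formulation below are illustrative assumptions, not the thesis's exact algorithm: the kernel is approximated with random Fourier features, the worst-case ℓ2 perturbation enters the hinge loss as an `eps * ||w||` margin penalty (the reduced minimization form for a model linear in the feature map), and, for clarity, the random features are drawn once up front rather than re-sampled each iteration as in full doubly stochastic gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(X, W, b):
    """Random Fourier features approximating an RBF kernel."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def adv_svm_dsg(X, y, eps=0.1, gamma=1.0, lam=1e-3, D=200, n_iter=2000):
    """Sketch of adversarial training of a kernel SVM with stochastic
    gradients on a robust hinge loss (illustrative, not the thesis's code).
    Labels y must be in {-1, +1}."""
    n, d = X.shape
    # spectral sampling for the RBF kernel exp(-gamma * ||x - x'||^2)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    w = np.zeros(D)
    for t in range(1, n_iter + 1):
        i = rng.integers(n)                       # stochastic sample
        phi = rff(X[i:i + 1], W, b)[0]
        # robust hinge: worst-case l2 perturbation adds eps * ||w||
        margin = y[i] * (w @ phi) - eps * np.linalg.norm(w)
        grad = lam * w
        if margin < 1.0:
            grad -= y[i] * phi
            nw = np.linalg.norm(w)
            if nw > 0:
                grad += eps * w / nw
        w -= (1.0 / (lam * t)) * grad             # diminishing step size
    return w, W, b

def predict(Xte, w, W, b):
    return np.sign(rff(Xte, W, b) @ w)
```

Because the model stays linear in the random-feature space, each update touches only one sample and D features, which is what makes this style of training scale to large datasets.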
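The self-adaptive radius idea in SAAT-SVM can likewise be sketched. The alternating scheme and the closed-form clipping rule below are hypothetical instantiations chosen for illustration (the thesis's actual self-paced regularizer and derivation are not reproduced here): given per-sample radii, take a subgradient step on a robust hinge loss; given the weights, reset each radius in closed form to the sample's margin slack, so confident samples are trained with a large radius while borderline samples keep a radius near zero and clean accuracy is preserved. A linear SVM is used to keep the sketch short.

```python
import numpy as np

def saat_linear_svm(X, y, eps_max=0.3, lam=1e-2, n_epochs=50, lr=0.1):
    """Alternating sketch of self-adaptive adversarial training on a
    linear SVM (hypothetical update rules). Labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    eps = np.zeros(n)                 # per-sample perturbation radii
    for _ in range(n_epochs):
        # step 1: subgradient step on the robust hinge loss
        #   max(0, 1 - y_i * w.x_i + eps_i * ||w||)
        margins = y * (X @ w)
        w_norm = np.linalg.norm(w)
        active = margins - eps * w_norm < 1.0
        grad = lam * w
        if active.any():
            grad -= (y[active, None] * X[active]).mean(axis=0)
            if w_norm > 0:
                grad += eps[active].mean() * w / w_norm
        w -= lr * grad
        # step 2: closed-form radii update -- each sample may be perturbed
        # up to its margin slack, clipped to [0, eps_max]
        margins = y * (X @ w)
        w_norm = max(np.linalg.norm(w), 1e-12)
        eps = np.clip((margins - 1.0) / w_norm, 0.0, eps_max)
    return w, eps
```

The appeal of a closed-form radius update is exactly the point made in the abstract: it replaces an outer search over candidate radii with one vectorized assignment per epoch, so the robustness/accuracy trade-off is tuned at negligible extra cost.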
Keywords/Search Tags: adversarial training, adversarial attack, kernel support vector machine, self-adaptive learning