With the rapid development of machine learning (ML) techniques, ML models are being adopted and deployed in daily life, and their security has become a significant research topic. Adversarial examples, a typical vulnerability of ML models, play an important role in security studies: they can not only induce models to make wrong predictions, but can also be injected into datasets, largely degrading model performance. By studying adversarial examples, corresponding adversarial defense mechanisms can be designed and the robustness of ML models can be reconsidered. Current adversarial attack algorithms focus mainly on natural language processing (NLP), targeting specific tasks and relying on NLP-specific techniques such as semantic analysis. Such algorithms can therefore hardly be extended to textual data in more general forms, such as plain texts and logs. In addition, evaluation metrics vary considerably from algorithm to algorithm. This paper presents two lines of research to cover the blind spots mentioned above. In the first part, a new attack algorithm for more general data forms is proposed, improving attack performance. The second part studies adversarial attacks and defenses on Graph Neural Networks (GNNs) in more complex applications. The theoretical content is further validated under realistic attack and defense settings.

The first part of the work proposes a new adversarial attack scheme, the Exploratory Character Edit Method (ECEM), which is applicable to most texts and logs. The attack can be either targeted or non-targeted, and assumes a black-box setting in which the attacker has no knowledge of the model architecture, hyper-parameters, or other internal information. ECEM generates adversarial examples with high similarity and short edit distance in linear time, and it outperformed other attack algorithms in the experiments across different datasets and models. The attack success rate reached 100%, and a data poisoning attack was subsequently launched by injecting adversarial examples into the datasets: less than 2% of adversarial logs caused 475 additional "attack" logs to bypass the intrusion detection system (IDS). In addition, adversarial training helped defend against these attacks, improving the robustness of ML models.

The second part focuses on adversarial texts and logs in more complex applications, where GNNs are considered. This paper proposes the Append Attack, which is based on topological data analysis (TDA). While preserving the original graph, adversarial nodes and connections are appended in order to decrease the model's accuracy. The algorithm comprises TDA, surrogate model training, gradient descent, transfer attack, and other modules. In the experiments, the accuracy of the victim GNN dropped from 59% to 26%, a relative performance drop of about 56%. Adversarial defenses were then explored through graph passivation, model selection, regularization, and other techniques, achieving notable success under the attack and defense scenarios considered.

In brief, in the context of text and log applications, this research on adversarial attacks and defenses once again prompts reconsideration of the robustness of ML models. One can expect ML techniques to facilitate people's lives while offering secure, trustworthy, and reliable services.
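To make the black-box character-edit setting described above concrete, the following is a minimal sketch of a greedy character-edit attack loop in the spirit of ECEM; it is not the ECEM algorithm itself, and it does not reproduce its linear complexity. The predict_proba scorer, the edit operations, and the greedy_attack budget are illustrative assumptions; in practice the scorer would be replaced by queries to the victim classifier (for example, an IDS log classifier).

```python
# Illustrative sketch of a black-box character-edit attack (not the ECEM algorithm).
# predict_proba is a toy stand-in; a real attack would query the victim model instead.
import random
import string

def predict_proba(text: str) -> float:
    """Hypothetical black-box query: probability that `text` is labeled 'attack'.
    A toy keyword scorer, used only to make the sketch runnable."""
    keywords = ["select", "union", "../", "passwd"]
    hits = sum(kw in text.lower() for kw in keywords)
    return min(1.0, 0.15 + 0.3 * hits)

def character_edits(text: str, pos: int):
    """Generate candidates by deleting, swapping, or substituting one character."""
    yield text[:pos] + text[pos + 1:]                                   # deletion
    if pos + 1 < len(text):
        yield text[:pos] + text[pos + 1] + text[pos] + text[pos + 2:]   # adjacent swap
    for ch in random.sample(string.ascii_lowercase, 3):
        yield text[:pos] + ch + text[pos + 1:]                          # substitution

def greedy_attack(text: str, max_edits: int = 5, threshold: float = 0.5) -> str:
    """Apply, one at a time, the single character edit that most reduces the
    'attack' score, until the prediction flips or the edit budget is spent."""
    adv = text
    for _ in range(max_edits):
        best, best_score = adv, predict_proba(adv)
        if best_score < threshold:            # prediction already flipped to 'normal'
            break
        for pos in range(len(adv)):
            for cand in character_edits(adv, pos):
                score = predict_proba(cand)   # one black-box query per candidate
                if score < best_score:
                    best, best_score = cand, score
        adv = best
    return adv

print(greedy_attack("GET /index.php?id=1 UNION SELECT passwd"))
```

The greedy loop preserves most characters, so the adversarial log stays close to the original in edit distance, which mirrors the high-similarity, short-distance property claimed for ECEM.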
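The Append Attack itself combines TDA-guided placement, surrogate model training, and a transfer attack on the victim GNN; the sketch below only illustrates the core gradient step of optimizing appended node features against a fixed linear surrogate Z = ÂXW. The toy graph, the full wiring of appended nodes to all original nodes, and all variable names are assumptions made for the example, not details taken from the paper.

```python
# Minimal numpy sketch of a node-append attack on a graph classifier, assuming a
# frozen one-layer linear surrogate Z = A_hat @ X @ W. Appended node features are
# optimized by gradient ascent on the surrogate's loss over the original nodes.
import numpy as np

rng = np.random.default_rng(0)

# --- toy original graph: 6 nodes, 4 features, 2 classes (all values are synthetic) ---
n_orig, d, c = 6, 4, 2
A = (rng.random((n_orig, n_orig)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                                          # symmetric, no self-loops
X = rng.normal(size=(n_orig, d))
y = rng.integers(0, c, size=n_orig)                  # node labels
W = rng.normal(size=(d, c))                          # frozen surrogate weights

def normalized_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def softmax(Z):
    e = np.exp(Z - Z.max(1, keepdims=True))
    return e / e.sum(1, keepdims=True)

# --- append k adversarial nodes, here wired to every original node for simplicity ---
k = 2
A_adv = np.zeros((n_orig + k, n_orig + k))
A_adv[:n_orig, :n_orig] = A
A_adv[n_orig:, :n_orig] = 1.0
A_adv[:n_orig, n_orig:] = 1.0
X_app = rng.normal(size=(k, d))                      # appended features to optimize

Y = np.eye(c)[y]                                     # one-hot labels of original nodes
for step in range(100):
    X_full = np.vstack([X, X_app])
    A_hat = normalized_adj(A_adv)
    P = softmax(A_hat @ X_full @ W)                  # surrogate predictions
    # cross-entropy gradient on the ORIGINAL nodes w.r.t. all features;
    # ascend it for the appended rows only, to maximize the surrogate's loss
    dZ = np.zeros_like(P)
    dZ[:n_orig] = P[:n_orig] - Y
    dX = A_hat.T @ dZ @ W.T
    X_app += 0.5 * dX[n_orig:]

acc_before = (softmax(normalized_adj(A) @ X @ W).argmax(1) == y).mean()
P_adv = softmax(normalized_adj(A_adv) @ np.vstack([X, X_app]) @ W)
acc_after = (P_adv.argmax(1)[:n_orig] == y).mean()
print(f"surrogate accuracy on original nodes: {acc_before:.2f} -> {acc_after:.2f}")
```

In the paper's pipeline, the perturbed graph produced by such an optimization would then be handed to the victim GNN as a transfer attack; here only the surrogate step is shown.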