
Analysis And Defense Of Bidirectional Poisoning Attack In I-SIG System

Posted on: 2020-01-18
Degree: Master
Type: Thesis
Country: China
Candidate: Y X Xiang
GTID: 2392330575498414
Subject: Information security

Abstract/Summary:
The intelligent traffic signal system I-SIG is attracting increasing attention. In practical deployments it has demonstrated clear benefits in improving traffic efficiency, and it is entering a new stage of rapid development in both research innovation and real-world deployment. However, with the emergence of research on data poisoning attacks against the I-SIG system, its security has come under researchers' scrutiny, and providing effective defenses against such attacks has become an urgent problem. Because the I-SIG system is composed of automated vehicles, an intelligent signal control system, and a communication network that carries data and control commands, all of these components are potential targets of malicious attacks. This thesis focuses on the analysis of, and defense against, poisoning attacks on the automated vehicles and the intelligent signal control system.

On the one hand, the automated vehicles in the I-SIG system rely on many AI technologies, including image recognition, speech recognition, and path planning. Recent research has shown that carefully crafted adversarial examples can make image recognition and speech recognition systems produce erroneous outputs and may lead to dangerous collisions, posing a major threat to the safe driving of an automated vehicle. Such attacks can also be mounted against the vehicle's path planning application. To prevent future accidents, this thesis studies adversarial attack and defense for a path planning system based on Q-Learning, a representative reinforcement learning method, and proposes a prediction model for adversarial examples together with corresponding defensive techniques. The model predicts adversarial examples from a set of adversarial features and their weights: five features (the energy point gravitation, the key point gravitation, the path gravitation, the included angle, and the placid point) are combined in a linear model whose weight parameters are computed with principal component analysis (PCA). Extensive experiments verify that the proposed model makes satisfactory predictions, reaching a precision of about 70% with proper parameter settings.
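To make the construction above concrete, the following is a minimal Python sketch of how a PCA-weighted linear model over the five adversarial features could be assembled. The feature encodings, the standardization step, the use of the first principal component's loadings as weights, and the decision threshold are illustrative assumptions; the abstract does not specify these details.

```python
# Hypothetical sketch of a PCA-weighted linear prediction model over the
# five adversarial features named in the abstract. Feature definitions,
# weighting scheme, and threshold are assumptions for illustration only.
import numpy as np
from sklearn.decomposition import PCA

FEATURES = ["energy_point_gravitation", "key_point_gravitation",
            "path_gravitation", "included_angle", "placid_point"]

def fit_weights(X: np.ndarray) -> np.ndarray:
    """Derive feature weights from the first principal component of the
    standardized feature matrix X with shape (n_samples, 5)."""
    X_std = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    pca = PCA(n_components=1)
    pca.fit(X_std)
    loadings = np.abs(pca.components_[0])   # contribution of each feature
    return loadings / loadings.sum()        # normalize weights to sum to 1

def adversarial_score(x: np.ndarray, weights: np.ndarray) -> float:
    """Linear combination of the five adversarial features for one sample."""
    return float(np.dot(weights, x))

def predict_adversarial(x: np.ndarray, weights: np.ndarray,
                        threshold: float = 0.5) -> bool:
    """Flag a sample as a likely adversarial example when its weighted
    score exceeds a threshold (the threshold value is an assumption)."""
    return adversarial_score(x, weights) > threshold
```

In this sketch the weights are fixed once from training data and then applied to each candidate state during path planning; how the thesis tunes the parameters to reach the reported ~70% precision is not described in the abstract.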
On the other hand, recent research on data poisoning attacks against the signal control system in I-SIG has made the security problems of the intelligent signal planning system obvious. Attackers send falsified real-time vehicle speed and location data to the intelligent signal planning algorithm (COP), triggering wrong decisions by the planning algorithm and causing traffic congestion or even network-wide traffic paralysis. To defend against such poisoned-data attacks on the signal planning system, a model-free reinforcement learning method, Q-Learning, is introduced to extend and reinforce COP, reducing the harmful effect of poisoned data on the COP algorithm to a certain extent. By constructing a state space, a discrete action space, and a reward function oriented to state change, the Q-Learning-COP reinforcement model is built. The poisoning attack on COP with contaminated data is simulated and reproduced on MMITSS, the latest open-source platform of the United States Department of Transportation, and defense experiments against the data poisoning attack are carried out with the proposed reinforcement model. The total vehicle delay with the defense in place is reduced by about 20% compared with no defense, which shows that the model has a good defensive effect.
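As an illustration of the reinforcement step described above, here is a minimal sketch of a tabular Q-Learning loop of the kind that could extend COP. The environment interface (SignalEnv-style reset/step), the state and action encodings, the reward, and the hyperparameters are all assumptions, since the abstract only names the components (state space, discrete action space, state-change-oriented reward).

```python
# Minimal, hypothetical sketch of tabular Q-Learning as a reinforcement layer
# over a signal planning algorithm. The environment API, action meanings,
# reward design, and hyperparameters are assumptions for illustration only.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
ACTIONS = [0, 1, 2, 3]                   # assumed discrete signal-phase decisions

Q = defaultdict(float)                   # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the discrete action space."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """Standard one-step Q-Learning update rule."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def train(env, episodes=1000):
    """Training loop against a simulated intersection environment; the API
    env.reset() -> state and env.step(action) -> (next_state, reward, done)
    is assumed."""
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = choose_action(state)
            next_state, reward, done = env.step(action)
            q_update(state, action, reward, next_state)
            state = next_state
```

A state-change-oriented reward, as the abstract describes, would presumably penalize decisions that increase measured vehicle delay between successive states, but the exact formulation is not given in the abstract.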
Keywords/Search Tags:Directional Poisoning Attack, Security Defense, Reinforcement Learning, PCA-based Predictive Model, Adversarial Examples, Q-Learning-COP Reinforcement Model