Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars, surveillance, and medical monitoring, where safety and lives are at stake. Recently, the robustness of existing HAR methods has been questioned due to their vulnerability to adversarial attacks, which raises serious concerns given the scale of the potential consequences. However, the proposed attacks require full knowledge of the attacked recognizer, which is overly restrictive in real-world scenarios and calls the significance of such threats into question. This thesis shows that such threats are indeed real, even when the attacker only has access to the input/output of the model. To this end, this thesis proposes the very first black-box adversarial attack approach in HAR, specifically on human skeletal motions. More importantly, defense research in skeleton-based HAR has been largely absent so far. This thesis hence aims to fill this research gap by proposing a new defense framework. Overall, this PhD thesis seeks to understand the adversarial vulnerability of Human Activity Recognition through adversarial attacks, the evaluation of motion attack quality, adversarial defense, and the design of a robust HAR classifier. The main work and contributions of this PhD thesis are as follows:

(1) This thesis proposes the first black-box attack method in skeleton-based HAR, called BASAR, and comprehensively evaluates the vulnerability of several state-of-the-art recognizers. BASAR explores the interplay between the classification boundary and the natural motion manifold. This is the first time the data manifold has been introduced into adversarial attacks on time series. The existence of on-manifold adversarial samples in motion datasets is demonstrated for the first time. Comprehensive experimental results show that on-manifold adversarial samples are truly dangerous because they are not easily identifiable even under strict perceptual studies.

(2) To further understand manifold adversarial attacks for skeletal action recognition, two novel adversaries, Manifold Attack and Angle Space Attack, are proposed. Both manifold adversarial attacks help mitigate the trade-off between the data manifold and the classification boundary. Manifold Attack can be regarded as a plug-and-play step and can hence be used in conjunction with other attack approaches. Angle Space Attack can generate 100% on-manifold adversarial samples.

(3) A new perceptual study protocol is proposed to evaluate motion attack quality, addressing the current lack of metrics suitable for this purpose. This thesis designs three perceptual studies: Deceitfulness, Naturalness, and Indistinguishability.

(4) A new adversarial training approach called mixed manifold-based adversarial training is proposed. It explores the interactions between on/off-manifold adversarial samples and clean samples during adversarial training. The experimental results show that a proper mixture of on/off-manifold adversarial samples and clean samples can simultaneously improve accuracy and robustness, as opposed to the common assumption that there is always a trade-off between them; a minimal sketch of such a training step is given below.
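The following is a minimal sketch of what one step of mixed manifold-based adversarial training could look like, assuming a PyTorch classifier `model`, a standard PGD routine for off-manifold samples, and a hypothetical `on_manifold_attack` helper (e.g. a perturbation constrained to the joint-angle space). The mixing weights and attack settings are illustrative assumptions, not the exact configuration used in the thesis.

```python
# Sketch of mixed manifold-based adversarial training (illustrative only).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.007, steps=10):
    """Standard PGD attack in the raw input space (off-manifold samples)."""
    x = x.detach()
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project back to the eps-ball
    return x_adv.detach()

def mixed_adv_train_step(model, optimizer, x, y, on_manifold_attack,
                         w_clean=1.0, w_off=0.5, w_on=0.5):
    """One step mixing clean, off-manifold and on-manifold adversarial samples."""
    x_off = pgd_attack(model, x, y)          # off-manifold adversarial samples
    x_on = on_manifold_attack(model, x, y)   # on-manifold samples (assumed helper)

    optimizer.zero_grad()
    loss = (w_clean * F.cross_entropy(model(x), y)
            + w_off * F.cross_entropy(model(x_off), y)
            + w_on * F.cross_entropy(model(x_on), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```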
(5) The first adversarial training method for HAR, and more broadly the first energy-based adversarial training method on time-series data, is proposed. Defense research in HAR is under-explored, and how to model the dynamics of domain-specific tasks during defense remains largely an open problem. This thesis proposes a new Bayesian Energy-based Adversarial Training framework that applies a fully Bayesian treatment to both the data and the network. This thesis further proposes a new Bayesian perspective on energy-based adversarial training, and a new post-train Bayesian strategy that preserves the black-box nature of classifiers and avoids a heavy memory footprint.
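As a rough illustration of the ideas in contribution (5), the sketch below shows the standard way of reading classifier logits as an energy function, E(x) = -logsumexp_y f_y(x), together with one possible form of a post-train Bayesian component that leaves the pretrained recognizer untouched. The names `clf`, `heads`, and `post_train_bayesian_predict`, the number of heads, and the way they perturb the logits are purely illustrative assumptions, not the exact design of the proposed framework.

```python
# Sketch: energy from classifier logits + a post-train Bayesian-style ensemble.
import torch
import torch.nn as nn

def energy(logits):
    # E(x) = -logsumexp_y f_y(x): lower energy ~ higher unnormalised likelihood.
    return -torch.logsumexp(logits, dim=-1)

@torch.no_grad()
def post_train_bayesian_predict(clf, heads, x):
    """Average predictions over lightweight sampled heads while keeping the
    pretrained classifier `clf` frozen (treated as a black box)."""
    logits = clf(x)  # the pretrained recognizer itself is never modified
    probs = torch.stack([torch.softmax(logits + h(logits), dim=-1) for h in heads])
    return probs.mean(dim=0)  # Bayesian-style model averaging over the heads

# Illustrative usage with a stand-in classifier and dummy skeleton input.
num_classes = 60
heads = [nn.Linear(num_classes, num_classes) for _ in range(3)]
clf = nn.Sequential(nn.Flatten(), nn.Linear(25 * 3, num_classes))
x = torch.randn(4, 25, 3)  # batch of 4 frames, 25 joints, 3D coordinates
print(energy(clf(x)).shape)                               # -> torch.Size([4])
print(post_train_bayesian_predict(clf, heads, x).shape)   # -> torch.Size([4, 60])
```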