
Video Emotion Classification Based On EEG Features And Audio-visual Features

Posted on: 2017-07-19
Degree: Master
Type: Thesis
Country: China
Candidate: H Ge
Full Text: PDF
GTID: 2334330503992876
Subject: Computer Science and Technology
Abstract/Summary:
As the volume of online video grows, the amount of video containing harmful content grows with it. To protect minors and other vulnerable groups, the classification of online video cannot be delayed. As an important foundation of harmful-content filtering, video emotion classification has become a hot research topic. In recent years, with the emergence of portable EEG acquisition devices and the growing demand for personalized video analysis, EEG signals have attracted more and more attention from researchers working on video emotion classification. To improve the accuracy of video emotion classification and to explore how to build classification models that effectively fuse video and EEG signals, this thesis carries out the following work.

First, this thesis reviews the state of research on video emotion classification and multimodal fusion classification. Through an in-depth study of multimodal-fusion video emotion classification, it identifies the key steps in building such a classification model: feature extraction, feature selection, and fusion classification. The thesis focuses on the latter two steps, feature selection and fusion classification.

Second, it proposes an EEG feature selection method based on decision trees. Addressing the feature selection step, the method specifies how decision-tree techniques can be applied to selecting features from EEG signals. Different EEG feature selection methods were compared on data set Ia of the BCI II competition benchmark. The experimental results show that, compared with representative traditional EEG feature selection methods, the proposed method is consistent with findings in physiology and achieves higher classification accuracy.

Finally, it proposes multimodal fusion with a kernel-based extreme learning machine (ELM) for video emotion classification, addressing the fusion classification step. Built on a kernel-based ELM, the proposed method fuses EEG features with audio-visual features and maps them into the space of emotion categories, offering a new approach to hybrid video emotion classification. A video-induced emotion EEG experiment was designed to collect the required video and EEG signals. The EEG data set is first used to verify that EEG signals are valid for video emotion classification; different video emotion classification methods are then tested on the collected EEG signals and videos. The results show that, compared with direct emotion classification using only the video signal and implicit emotion classification using only the EEG signal, the proposed multimodal fusion method achieves higher classification accuracy.
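The abstract gives no implementation details, so the following Python sketch is only an illustration of the kind of pipeline it describes: EEG features are ranked and selected with a decision tree, concatenated with audio-visual features, and classified with a kernel-based ELM. The RBF kernel, the concatenation-based feature-level fusion, the synthetic data, and all names (select_eeg_features, KernelELM, etc.) are assumptions made here for illustration, not details taken from the thesis.

    # Minimal sketch, assuming feature-level fusion, an RBF kernel, and synthetic data.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def select_eeg_features(eeg, labels, top_k=16):
        """Rank EEG features with a decision tree and keep the top_k most important."""
        tree = DecisionTreeClassifier(random_state=0).fit(eeg, labels)
        return np.argsort(tree.feature_importances_)[::-1][:top_k]

    def rbf_kernel(A, B, gamma=0.1):
        """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)

    class KernelELM:
        """Kernel-based ELM: beta = (I/C + K)^-1 T, prediction = K_test @ beta."""
        def __init__(self, C=10.0, gamma=0.1):
            self.C, self.gamma = C, gamma

        def fit(self, X, y):
            self.X_train = X
            T = np.eye(y.max() + 1)[y]                      # one-hot emotion targets
            K = rbf_kernel(X, X, self.gamma)                # training kernel matrix
            self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
            return self

        def predict(self, X):
            K = rbf_kernel(X, self.X_train, self.gamma)
            return np.argmax(K @ self.beta, axis=1)

    # Toy usage with random stand-ins for the real EEG and audio-visual features.
    rng = np.random.default_rng(0)
    eeg = rng.normal(size=(200, 64))        # 200 clips x 64 EEG features
    av = rng.normal(size=(200, 32))         # 200 clips x 32 audio-visual features
    y = rng.integers(0, 3, size=200)        # 3 hypothetical emotion categories

    idx = select_eeg_features(eeg, y, top_k=16)
    fused = np.hstack([eeg[:, idx], av])    # fusion by concatenation (an assumption)
    clf = KernelELM(C=10.0, gamma=0.05).fit(fused[:150], y[:150])
    print("accuracy:", (clf.predict(fused[150:]) == y[150:]).mean())

In the thesis itself, the EEG and audio-visual features would come from the designed video-induced emotion experiment rather than random numbers, and the kernel type and regularization constant C would be chosen on that data.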
Keywords/Search Tags: video emotion classification, EEG, multimodal fusion, decision tree, kernel-based ELM