
Research On Sentiment Analysis Algorithm Based On Different Modes

Posted on: 2024-06-14
Degree: Master
Type: Thesis
Country: China
Candidate: X Wang
Full Text: PDF
GTID: 2568307127954189
Subject: Computer Science and Technology
Abstract/Summary:
With technological progress and the prevalence of the internet and smartphones, people can now express their emotions online through many different channels. Exponentially growing volumes of data in different modalities, such as text, speech, and video, are generated on the Internet every day. These data carry rich emotional information but have heterogeneous structures, so sentiment analysis across modalities can help people comprehend and explore that information. The main research contents and innovations of this paper are as follows:

To address the irregular text and the complexity of Chinese in sentiment analysis of Chinese microblogs, this paper proposes a text sentiment analysis method based on multi-head attention and a multi-scale CNN. By combining the advantages of CNNs and RNNs, the model captures emotion features from different perspectives and improves its emotion recognition ability. First, special formats and meaningless characters are removed from the microblog text, and the BERT pre-trained model is used to generate dynamic word vectors. Second, a global feature extraction layer built on BiLSTM and a multi-head attention mechanism captures the contextual dependencies of the text, yielding comprehensive global features and a better overall understanding of context; at the same time, a local feature extraction layer based on a multi-scale CNN mines the local features carried by words and phrases of different lengths. Finally, the global and local features are combined to predict the sentiment. Experiments on SMP2020-EWECT and Weibo_senti_100k show that this method significantly improves the model's overall performance on text sentiment analysis.

To address the problems of feature selection and speech-text relationship modeling in speech sentiment analysis, this paper
proposes a speech sentiment analysis model based on multi-task learning. First, the audio waveform is encoded by the Wav2Vec 2.0 pre-trained model to obtain a temporal feature representation of the speech. Second, following the idea of multi-task learning, these temporal features are shared during training between a speech translation task and a speech emotion classification task. Finally, speech sentiment prediction is performed in the validation phase. The model reaches an accuracy of 72.8% on the IEMOCAP dataset, and ablation experiments verify the effectiveness of the multi-task learning framework.

To address the limitations of single-modal sentiment analysis relative to multimodal sentiment analysis, as well as the insufficient cross-modal interaction and unbalanced optimization across modalities in existing multimodal models, this paper proposes a multimodal sentiment analysis model based on a dynamic gradient mechanism and a multi-view co-attention mechanism. First, the dynamic gradient mechanism monitors the differences in contribution and learning speed of each modality during training and dynamically controls each modality's convergence speed, so that the features of every modality are fully learned. Second, the learned features are fused through multi-view co-attention: different views are constructed between each pair of modalities for bidirectional interaction, and the long-range contextual information between each pair is learned, strengthening cross-modal interaction. Finally, sentiment prediction combines the fused features with the single-modal features. Experiments on the CMU-MOSI and CMU-MOSEI datasets show that this method fully learns information within and across modalities and effectively improves the accuracy of multimodal sentiment analysis.

In summary, this paper delves into text sentiment
analysis, speech sentiment analysis, and multimodal sentiment analysis, examines the existing issues in each area, and conducts thorough research and exploration to address them, thereby providing a more comprehensive perspective for sentiment analysis research and further improving recognition performance.
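The thesis itself provides no code, so the following is a minimal NumPy sketch of the text-model idea only: a global feature obtained by attention over token vectors, concatenated with local features from multi-scale 1-D convolutions. All dimensions, weights, and the single-head attention stand-in for "BiLSTM + multi-head attention" are illustrative assumptions, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 10 tokens with 16-dim "BERT-style" word vectors.
seq_len, dim = 10, 16
x = rng.standard_normal((seq_len, dim))

def attention_pool(h):
    """Single-head scaled dot-product self-attention, mean-pooled into
    one global feature vector (a simplified stand-in for the thesis's
    BiLSTM + multi-head attention layer)."""
    scores = h @ h.T / np.sqrt(h.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights @ h).mean(axis=0)              # shape: (dim,)

def multi_scale_conv(h, widths=(2, 3, 4), filters=8):
    """1-D convolutions with several kernel widths, ReLU'd and
    max-pooled over time, then concatenated - mirroring the
    multi-scale CNN branch that captures phrases of different lengths."""
    feats = []
    for w in widths:
        k = rng.standard_normal((w, h.shape[1], filters)) * 0.1
        windows = np.stack([h[i:i + w] for i in range(len(h) - w + 1)])
        conv = np.einsum('nwd,wdf->nf', windows, k)    # (n_windows, filters)
        feats.append(np.maximum(conv, 0).max(axis=0))  # ReLU + max-pool
    return np.concatenate(feats)                   # (filters * len(widths),)

global_feat = attention_pool(x)
local_feat = multi_scale_conv(x)
fused = np.concatenate([global_feat, local_feat])  # combined for prediction
print(fused.shape)  # (16 + 8*3,) = (40,)
```

A real implementation would learn the convolution kernels and attention projections end to end; the sketch only shows how the two feature streams are produced and concatenated.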
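The multi-task speech model can likewise be sketched in miniature: one shared encoder over frame features feeds both an utterance-level emotion head and a per-frame "translation" head, and training minimizes a weighted sum of the two losses. The Wav2Vec 2.0 encoder is replaced by random frame features, and every matrix, dimension, and loss weight here is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: a 100-frame "Wav2Vec-style" feature sequence.
frames, dim, n_emotions, vocab = 100, 32, 4, 50
speech = rng.standard_normal((frames, dim))

# One shared encoder feeds two task-specific heads.
W_shared = rng.standard_normal((dim, dim)) * 0.1
W_emotion = rng.standard_normal((dim, n_emotions)) * 0.1  # classification head
W_translate = rng.standard_normal((dim, vocab)) * 0.1     # translation head

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

shared = np.tanh(speech @ W_shared)        # shared temporal representation

# Emotion head: pool over time, then classify the whole utterance.
emotion_probs = softmax(shared.mean(axis=0) @ W_emotion)

# Translation head: per-frame token distribution (greatly simplified).
token_probs = softmax(shared @ W_translate)

# Joint training objective: weighted sum of the two task losses.
y_emotion = 2
y_tokens = rng.integers(vocab, size=frames)
loss = (-np.log(emotion_probs[y_emotion])
        - 0.5 * np.log(token_probs[np.arange(frames), y_tokens]).mean())
print(emotion_probs.shape, loss > 0)
```

The point of the shared encoder is that gradients from the translation task regularize the features used for emotion classification; at validation time only the emotion head is used, as in the thesis.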
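Finally, a sketch of the two multimodal ideas: pairwise "views" built from bidirectional interaction between each pair of modalities, and a dynamic gradient mechanism that rebalances per-modality updates by their measured contribution. The gated-sum co-attention and the mean-magnitude contribution score are simplified assumptions standing in for the thesis's mechanisms.

```python
import numpy as np

rng = np.random.default_rng(2)

dim = 8
# Toy single-modality feature vectors for text, audio, and video.
modalities = {m: rng.standard_normal(dim) for m in ('text', 'audio', 'video')}

def co_attention(a, b):
    """Bidirectional interaction for one modality pair: a scalar
    affinity gates how the two vectors are mixed into one 'view'."""
    affinity = float(a @ b) / np.sqrt(len(a))
    gate = 1.0 / (1.0 + np.exp(-affinity))   # sigmoid gate
    return gate * a + (1.0 - gate) * b

# One view per modality pair, concatenated with the single-modal
# features for the final sentiment prediction, as in the thesis.
pairs = [('text', 'audio'), ('text', 'video'), ('audio', 'video')]
views = [co_attention(modalities[p], modalities[q]) for p, q in pairs]
fused = np.concatenate(views + list(modalities.values()))

# Dynamic gradient mechanism (sketch): measure each modality's
# contribution and scale its update inversely, so a fast-converging
# modality does not dominate training.
contributions = {m: float(np.abs(v).mean()) for m, v in modalities.items()}
mean_c = sum(contributions.values()) / len(contributions)
grad_scale = {m: mean_c / c for m, c in contributions.items()}
print(fused.shape, sorted(grad_scale))
```

In a full model the gradient scales would be recomputed each step from per-modality loss contributions; here they simply illustrate the inverse relationship between contribution and update size.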
Keywords/Search Tags: Sentiment Analysis, Attention Mechanism, Multi-modal Feature, Deep Learning, Natural Language Processing