
Research On Sentiment Analysis Method Based On Multi-Modal Information Clustering

Posted on: 2024-01-06
Degree: Master
Type: Thesis
Country: China
Candidate: X F Chen
Full Text: PDF
GTID: 2568307103495774
Subject: Computer technology
Abstract/Summary:
Sentiment analysis is an important research direction in natural language processing that aims to identify and understand human emotional tendencies and attitudes through computational methods. Multimodal sentiment analysis combines multiple information modalities, such as text, video, and audio, for comprehensive sentiment prediction, and fusing multiple modalities can effectively improve sentiment recognition rates. However, existing multimodal sentiment analysis methods rely on a single classification model, which makes it difficult to explore the underlying relationships between modalities and thereby hinders the learning of latent cross-modal feature representations. Moreover, classification models require large amounts of annotated data for training, making it challenging and time-consuming to construct high-quality sentiment classification models. To address these issues, this work applies multimodal information clustering to sentiment analysis, achieving more efficient and accurate sentiment prediction without requiring large amounts of annotated data.

First, we investigated multimodal information clustering and designed a multimodal clustering algorithm based on maximum mutual information. This algorithm effectively explores latent feature relationships between modalities and supports cross-modal similarity learning, avoiding the prediction inaccuracy caused by large inter-modal differences during feature fusion.

Second, to address the problems faced by existing multimodal sentiment analysis tasks, we applied this clustering algorithm to sentiment analysis and constructed a self-supervised multimodal sentiment analysis model. Experimental results show that the model improves multimodal sentiment prediction accuracy without requiring large amounts of labeled data.

Additionally, we designed a multitask feature augmentation strategy that uses a multitask mechanism to jointly train the multimodal clustering model and the classification model. The clustering task uncovers latent prior knowledge within each modality, which the strategy injects into the multimodal classification feature representation module, yielding a positive improvement in multimodal sentiment prediction.

Beyond the core algorithms, we further explored the effectiveness of multimodal methods in other fields through extensibility research on multimodal intelligent information processing. This research focused on the analysis of multimodal clinical information in traditional Chinese medicine, using data from multiple modalities, such as patient pulse-diagnosis signals and sublingual vein signals, in single-modal and multimodal fusion experiments to predict a patient's health condition. Experimental results show that multimodal prediction significantly outperforms single-modal prediction, demonstrating the effectiveness and potential applications of multimodal algorithms in other fields.
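The maximum-mutual-information clustering idea can be illustrated with a contrastive lower bound on mutual information between two modality embeddings (the InfoNCE form, where paired samples across modalities act as positives). This is only a hypothetical sketch of the general technique, not the thesis's actual objective: the two-modality setup, batch pairing, and temperature are all assumptions.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Contrastive lower bound on mutual information between two
    modality embeddings; row i of z_a is paired with row i of z_b."""
    # L2-normalise each embedding so the dot product is a cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) cross-modal similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy on the diagonal: each sample should match its own pair
    return -np.mean(np.diag(log_softmax))
```

Minimising this loss pulls paired cross-modal embeddings together, which is one standard way to realise similarity learning between modalities before clustering the aligned features.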
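The multitask feature augmentation strategy, jointly training a clustering objective and a classification objective over shared features, can be sketched as a weighted sum of the two losses. Everything here (the linear classification head, Euclidean soft cluster assignments, and the weight `lam`) is an illustrative assumption rather than the thesis's actual formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_loss(features, labels, W_cls, centroids, lam=0.5):
    """Weighted sum of a supervised cross-entropy term and an
    unsupervised clustering term over the shared features."""
    # classification branch: linear head + cross-entropy on true labels
    probs = softmax(features @ W_cls)
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    # clustering branch: soft assignment to the nearest centroid,
    # penalising diffuse (low-confidence) assignments
    d = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = softmax(-d)
    cluster = -np.mean(np.log(assign.max(axis=1) + 1e-12))
    return ce + lam * cluster
```

Optimising the two terms through a shared feature extractor is what lets the clustering task feed latent prior knowledge into the classification representation; with `lam=0` the sketch degrades to plain supervised training.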
Keywords/Search Tags:Sentiment analysis, Multimodal information clustering, Self-supervised learning, Multitask learning, Feature augmentation