
Research On Multimodal Sentiment Analysis Of Social Media Based On Deep Learning

Posted on: 2024-01-10
Degree: Master
Type: Thesis
Country: China
Candidate: H Z Li
Full Text: PDF
GTID: 2568307100988709
Subject: Computer Science and Technology
Abstract/Summary:
With the rapid development of social networks, traditional social media, which consisted mainly of textual content, is gradually shifting toward multimodal content. Combining text and images to express attitudes and emotions is one of the most common practices on mainstream platforms. Analyzing the sentiment of social media content not only supports public-opinion monitoring and guidance but also provides decision support for individuals and businesses. This thesis focuses on the "semantic gap" between data of different modalities and, drawing on the characteristics of social platforms, conducts multimodal sentiment analysis research for social media based on deep learning methods. The main contributions are as follows:

(1) Research on a multimodal sentiment analysis method combined with visual attention. To address the "semantic gap" between image and text data at the feature level, this thesis proposes prototypicality-based correlation analysis to associate semantically related image and text features. Observation of image-text posts on social platforms shows that each sentence and each image tends to focus on a single subject, and the information in the image is usually related to content mentioned in the text: text is the primary carrier of information, while the image plays an auxiliary role. Accordingly, this thesis proposes a multimodal sentiment analysis model with visual attention, which guides the image information toward the most prominent sentence in the text. Experimental results show that this model effectively improves sentiment analysis performance, exceeding existing sentiment classification models by 1.25% in accuracy and 1.34% in Macro-F1.

(2) Research on a cross-modal sentiment analysis method based on
a gate mechanism. Because content on social platforms is unrestricted, informally written, arbitrary in tone, and divergent in theme, the image and text of a post are often inconsistent in what they express. On such data, the multimodal sentiment analysis model with visual attention tends to degenerate into single-text-modality sentiment analysis. To address this issue, this thesis proposes cross-modal attention that performs interactive learning between modalities to correct misleading information in the text. Then, to filter out the noise and redundant information generated during this interactive learning, a forget gate mechanism is introduced so that the model can fully exploit the information useful for sentiment classification. Experimental results show that the model effectively uses the sentiment information of both the textual and image modalities; compared with the variant without noise filtering, sentiment analysis performance improves to a certain extent, reaching an accuracy of 86.94% and a Macro-F1 of 74.91%.
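The image-guided sentence attention in contribution (1) could be sketched as follows. This is a minimal illustrative PyTorch sketch, not the thesis's actual implementation: the module name `VisualAttention`, the additive scoring function, and all dimensions are assumptions; it only shows the general idea of using the image representation as a query to weight per-sentence text features.

```python
import torch
import torch.nn as nn

class VisualAttention(nn.Module):
    """Hypothetical sketch: the image feature acts as an attention query
    over sentence features, steering visual information toward the most
    prominent sentence in the text (additive attention, assumed form)."""
    def __init__(self, img_dim, txt_dim, hidden_dim):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden_dim)
        self.txt_proj = nn.Linear(txt_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, img_feat, sent_feats):
        # img_feat: (batch, img_dim); sent_feats: (batch, n_sents, txt_dim)
        q = self.img_proj(img_feat).unsqueeze(1)             # (batch, 1, hidden)
        k = self.txt_proj(sent_feats)                        # (batch, n_sents, hidden)
        scores = self.score(torch.tanh(q + k)).squeeze(-1)   # (batch, n_sents)
        weights = torch.softmax(scores, dim=-1)              # attention over sentences
        fused = (weights.unsqueeze(-1) * sent_feats).sum(1)  # weighted sentence summary
        return fused, weights
```

A downstream classifier head would then take `fused` (optionally concatenated with the image feature) to predict the sentiment label.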
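The cross-modal attention with a forget gate in contribution (2) could be sketched along these lines. Again a hedged sketch rather than the thesis's method: the class name `GatedCrossModalFusion`, the use of `nn.MultiheadAttention` for the cross-modal step, and the sigmoid gating form are assumptions that illustrate the described filter-out-noise behavior.

```python
import torch
import torch.nn as nn

class GatedCrossModalFusion(nn.Module):
    """Hypothetical sketch: text tokens attend to image regions
    (cross-modal interactive learning), then a sigmoid 'forget gate'
    decides how much of the interaction feature to keep versus the
    original text feature, filtering noise and redundancy."""
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, txt, img):
        # txt: (batch, n_tokens, dim); img: (batch, n_regions, dim)
        inter, _ = self.cross_attn(query=txt, key=img, value=img)
        g = torch.sigmoid(self.gate(torch.cat([txt, inter], dim=-1)))
        # g near 1 keeps the cross-modal interaction; near 0 "forgets" it
        return g * inter + (1 - g) * txt
```

The gate is computed per token and per dimension, so misleading or redundant interaction features can be suppressed locally while useful ones pass through to the sentiment classifier.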
Keywords/Search Tags:multimodal sentiment analysis, image-text fusion, attention mechanism, gate mechanism