
Bi-LSTM Short Text Sentiment Analysis Combining Semantics and a Self-Attention Mechanism

Posted on: 2024-06-29
Degree: Master
Type: Thesis
Country: China
Candidate: Y W Yue
Full Text: PDF
GTID: 2568307112976769
Subject: Electronic information
Abstract/Summary:
Solving the sentiment analysis task for short text manually is labor-intensive and inefficient, so automatically analyzing short text for sentiment has become an urgent problem in the field of natural language processing. This thesis first proposes BiLSTM-TCA, a semantics-fusing short-text sentiment analysis model that performs well on multiple datasets. However, this single-channel model does not make sufficient use of text feature maps, and its complex hierarchical structure results in low training efficiency and high computational cost. To address this, the thesis designs a two-channel UNF-CA short-text sentiment analysis model, for which SE, SK, and ECA channel attention mechanisms are selected. A series of comparative experiments shows that the two-channel model compensates for the shortcomings of the single-channel model and performs excellently on multiple datasets.

(1) Because a model input size that is too large or too small may produce a sparse text matrix or mask key semantics, the semantics-fusing BiLSTM-TCA short-text sentiment analysis model selects the most appropriate input size for each dataset according to its length distribution, determined through a series of comparative experiments. In addition, the gating mechanism of the Bi-LSTM mitigates the sparse text matrices that some short texts produce, and the text matrix output by the Bi-LSTM integrates contextual semantic information, making the representation more comprehensive. The generated feature vectors are then weighted by a self-attention mechanism, which improves the model's accuracy. Experimental results show that the single-channel model handles the varying text lengths across datasets, integrates contextual semantics effectively, and improves performance compared with traditional deep learning models.

(2) Sentiment analysis based on a single text feature map is relatively limited, and deepening the single-channel model places a heavy burden on computing power and may reduce the model's learning efficiency and performance. To address this, the two-channel UNF-CA text sentiment analysis model is designed: it makes full use of the feature maps generated by its two channels and combines three different channel attention mechanisms, effectively increasing sensitivity to useful channels while suppressing meaningless ones. The experimental results show that, in general, the two-channel UNF-CA model outperforms the single-channel model. In follow-up experiments aimed at further improving the two-channel model, stacked channel attention modules were adopted and compared against the single-channel complex model BiLSTM-TCA of Chapter 3; accuracy decreased, indicating that the two-channel model is not unconditionally superior to the single-channel model. Both model complexity and feature-extraction efficiency are important factors affecting model performance.
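The abstract does not give implementation details for the self-attention step over the Bi-LSTM outputs described in (1). A minimal NumPy sketch, assuming scaled dot-product self-attention and hypothetical shapes (a sequence of T hidden states of dimension d, standing in for Bi-LSTM per-timestep outputs), might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H):
    # H: (T, d) hidden states, e.g. Bi-LSTM outputs per timestep.
    # Each row of `weights` sums to 1 and re-weights the timesteps,
    # emphasizing the feature vectors most relevant to each position.
    d = H.shape[1]
    weights = softmax(H @ H.T / np.sqrt(d), axis=-1)  # (T, T)
    return weights @ H, weights                       # context vectors, weights

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 8))  # 20 timesteps, 8-dim hidden state (illustrative)
context, weights = self_attention(H)
print(context.shape)  # (20, 8)
```

This is only a sketch of the generic mechanism; the thesis model may use learned projection matrices and different scoring, which the abstract does not specify.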
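The channel attention used in (2) is likewise not detailed in the abstract. As an illustrative NumPy sketch of one of the named mechanisms, SE (Squeeze-and-Excitation) channel attention rescales each channel of a feature map by a learned weight in (0, 1); all shapes and weight matrices below are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_channel_attention(fmap, W1, W2):
    # fmap: (C, H, W) feature map; W1: (C//r, C), W2: (C, C//r)
    # with reduction ratio r. Useful channels get weights near 1,
    # uninformative channels are suppressed toward 0.
    z = fmap.mean(axis=(1, 2))                 # squeeze: global average pool -> (C,)
    s = sigmoid(W2 @ np.maximum(W1 @ z, 0.0))  # excitation: per-channel weight in (0, 1)
    return s[:, None, None] * fmap             # rescale each channel

rng = np.random.default_rng(1)
C, r = 16, 4
fmap = rng.standard_normal((C, 6, 6))
W1 = rng.standard_normal((C // r, C)) * 0.1    # hypothetical learned weights
W2 = rng.standard_normal((C, C // r)) * 0.1
out = se_channel_attention(fmap, W1, W2)
print(out.shape)  # (16, 6, 6)
```

SK and ECA attention follow the same squeeze-then-reweight pattern with different excitation designs; how the thesis combines the three across its two channels is not specified in the abstract.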
Keywords/Search Tags: text sentiment analysis, deep learning, bidirectional long short-term memory network, channel attention mechanism, text feature map