
Music Sentiment Analysis Based On Multi-modal Information

Posted on: 2021-03-03
Degree: Master
Type: Thesis
Country: China
Candidate: J J Zeng
GTID: 2415330626960382
Subject: Computer technology

Abstract/Summary:
With the continued development of sentiment analysis research, music, as a multi-modal information carrier common in daily life, has gradually entered the scope of sentiment analysis: it conveys emotion through both lyrics and melody. Beyond lyrics and melody, musical structure, such as the verse and the chorus, also plays an important role in conveying emotion, acting as an emotional indicator. These characteristics allow music to capture a listener's emotions accurately within a short time and thereby evoke resonance. To better explore the relationship between music and emotion, this thesis applies deep learning methods along two dimensions: music emotion classification and emotion-constrained music generation.

(1) Music sentiment classification fusing multi-modal information. To better analyze the emotions contained in music, this thesis first proposes a hierarchical framework for analyzing musical structure. It then constructs a novel multi-modal interaction framework: an emotion vector is extracted for each modality at each time step, and these vectors are fused and updated so that the emotions expressed by the different modalities remain consistent (see the first sketch after this abstract). Finally, the results of structure analysis and multi-modal interaction are combined to classify the emotion of a piece of music. Experiments on a purpose-built music dataset show that the proposed multi-modal fusion framework performs well on the music sentiment analysis task.

(2) Music generation under emotional guidance. Building on the sentiment analysis results, this thesis further addresses music generation. Lyrics and their accompanying melody typically express a consistent emotion. This thesis therefore constructs a dual Seq2Seq framework trained with reinforcement learning: by introducing an emotional-consistency reward and a content-fidelity reward, the output melody is encouraged to share the emotion of the input lyrics (a sketch of the reward computation follows the first sketch below). Evaluating music generation requires not only objective measures of model accuracy but also subjective human evaluation, i.e., whether the emotion a listener perceives in the generated audio matches that of the lyrics. Results on the constructed experimental dataset show that the proposed music generation framework achieves good results.

As one of the main carriers of human emotional expression, music contains rich modal information and emotion types, which provide an important foundation for the music emotion classification and music generation studied here. This thesis carries out music sentiment classification fusing multi-modal information and music generation under sentiment constraints. This work can provide effective technical support for practical applications such as music creation and music recommendation, and can also promote research on music sentiment analysis.
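The abstract does not give implementation details for the multi-modal interaction step. The following is a minimal PyTorch sketch of one plausible reading, in which per-modality emotion vectors are extracted at each time step, blended by a learned gate, and pulled toward agreement by a consistency term. All module names, dimensions, and the gating scheme are assumptions for illustration, not the thesis's exact architecture.

```python
import torch
import torch.nn as nn

class MultiModalEmotionFusion(nn.Module):
    """Sketch: per-time-step emotion-vector fusion across two modalities
    (e.g. lyrics and melody). Names, dimensions, and the gating scheme
    are illustrative assumptions."""

    def __init__(self, lyric_dim, melody_dim, emo_dim):
        super().__init__()
        self.lyric_rnn = nn.GRU(lyric_dim, emo_dim, batch_first=True)
        self.melody_rnn = nn.GRU(melody_dim, emo_dim, batch_first=True)
        # Project each modality's hidden state to an emotion vector.
        self.lyric_emo = nn.Linear(emo_dim, emo_dim)
        self.melody_emo = nn.Linear(emo_dim, emo_dim)
        # Gate deciding how to blend the two emotion vectors at each step.
        self.gate = nn.Linear(2 * emo_dim, emo_dim)

    def forward(self, lyrics, melody):
        # lyrics: (batch, T, lyric_dim); melody: (batch, T, melody_dim)
        h_l, _ = self.lyric_rnn(lyrics)         # (batch, T, emo_dim)
        h_m, _ = self.melody_rnn(melody)        # (batch, T, emo_dim)
        e_l = torch.tanh(self.lyric_emo(h_l))   # lyric emotion vectors
        e_m = torch.tanh(self.melody_emo(h_m))  # melody emotion vectors
        g = torch.sigmoid(self.gate(torch.cat([e_l, e_m], dim=-1)))
        fused = g * e_l + (1 - g) * e_m         # fused emotion per time step
        # Consistency term pulls the two modalities' emotions together,
        # matching the abstract's stated cross-modal consistency constraint.
        consistency = ((e_l - e_m) ** 2).mean()
        return fused, consistency
```

During training, the consistency term would be added to the classification loss so that the emotions extracted from the two modalities are constrained to agree, as the abstract describes.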
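The abstract names two reward terms for the reinforcement-learning setup but not their form. The sketch below shows one plausible shape: an emotional-consistency reward measuring agreement between a shared emotion classifier's predictions for the input lyrics and the generated melody, and a content-fidelity reward proxied by the dual (melody-to-lyrics) model's reconstruction log-likelihood. The classifier and dual-model interfaces are hypothetical.

```python
import torch

def total_reward(lyric_emotion_logits, melody_emotion_logits,
                 dual_log_prob, alpha=0.5):
    """Combine the two reward terms named in the abstract (sketch).

    lyric_emotion_logits / melody_emotion_logits: outputs of a
    (hypothetical) shared emotion classifier applied to the input lyrics
    and the generated melody, shape (batch, num_emotions).
    dual_log_prob: log-likelihood of reconstructing the lyrics from the
    generated melody under the dual Seq2Seq model, shape (batch,).
    """
    p_lyric = torch.softmax(lyric_emotion_logits, dim=-1)
    p_melody = torch.softmax(melody_emotion_logits, dim=-1)
    # Emotional-consistency reward: similarity of the two emotion
    # distributions (here, one minus their total-variation distance).
    r_emotion = 1.0 - 0.5 * (p_lyric - p_melody).abs().sum(dim=-1)
    # Content-fidelity reward: how well the melody preserves the lyrics'
    # content, proxied by the dual model's reconstruction log-probability.
    r_content = dual_log_prob
    return alpha * r_emotion + (1 - alpha) * r_content
```

In a policy-gradient scheme such as REINFORCE, this scalar reward would weight the log-probability of the sampled melody when updating the lyrics-to-melody generator.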
Keywords/Search Tags: Multimodality, music sentiment classification, sentiment analysis, music structure analysis, music generation