In recent years, joint analysis of multiple affective states based on multi-task learning has become an important research topic in natural language processing and artificial intelligence. It recognizes the categories of the various affective states expressed in an utterance by combining multimodal information with knowledge from related tasks. Sentiment analysis, emotion recognition, and sarcasm detection are closely related tasks in affective computing. This paper takes these three tasks as its research objects and studies them in light of the current challenges. The specific work covers the following three aspects:

(1) Since the development of Chinese multi-task learning models is limited by the available datasets, this paper builds a Chinese multi-task, multimodal dialogue affective corpus to support the development of multi-task, multimodal sentiment analysis. The dataset is simultaneously annotated with labels for multiple tasks (sentiment, emotion, sarcasm, humor, etc.), and for the first time the correlations between sentiment and emotion and between sarcasm and humor are manually annotated. Evaluation and analysis show that the dataset is of high quality and representative.

(2) Based on the constructed dataset, this paper considers three aspects, namely context interaction, multimodal feature fusion, and multi-task learning, and proposes a multimodal sentiment analysis model based on multi-task learning. Experimental evaluation demonstrates the effectiveness of the model.

(3) Because the model in (2) does not take the interrelationships among the tasks into account, this paper further proposes a multi-task learning model based on soft parameter sharing to learn the commonalities and differences between the tasks. Comparison with other strong baselines demonstrates that the proposed method is effective and competitive.
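As a rough illustration of the soft parameter sharing idea mentioned in (3), the sketch below gives a minimal multi-task model with private encoders for sentiment, emotion, and sarcasm whose parameters are softly tied by an L2 penalty. This is not the thesis's actual architecture: all class names, dimensions, label counts, and the assumption that multimodal features are already fused into a single vector are hypothetical.

```python
# Minimal sketch of soft parameter sharing for multi-task affective analysis.
# All names, dimensions, and class counts below are illustrative assumptions.
import torch
import torch.nn as nn


class TaskEncoder(nn.Module):
    """One private encoder per task; parameters are tied only softly."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class SoftSharingMTL(nn.Module):
    """Sentiment / emotion / sarcasm heads with soft parameter sharing:
    each task keeps its own encoder, and an L2 penalty between corresponding
    encoder weights encourages them to stay similar while still allowing
    task-specific differences."""
    def __init__(self, in_dim=768, hid_dim=256,
                 n_sentiment=3, n_emotion=7, n_sarcasm=2):
        super().__init__()
        self.encoders = nn.ModuleDict({
            "sentiment": TaskEncoder(in_dim, hid_dim),
            "emotion": TaskEncoder(in_dim, hid_dim),
            "sarcasm": TaskEncoder(in_dim, hid_dim),
        })
        self.heads = nn.ModuleDict({
            "sentiment": nn.Linear(hid_dim, n_sentiment),
            "emotion": nn.Linear(hid_dim, n_emotion),
            "sarcasm": nn.Linear(hid_dim, n_sarcasm),
        })

    def forward(self, fused_features: torch.Tensor) -> dict:
        # `fused_features` stands in for already-fused multimodal utterance
        # representations (text/audio/vision fusion is out of scope here).
        return {task: self.heads[task](enc(fused_features))
                for task, enc in self.encoders.items()}

    def sharing_penalty(self) -> torch.Tensor:
        # Soft sharing: penalize the squared distance between corresponding
        # parameters of the task-specific encoders.
        names = list(self.encoders)
        penalty = 0.0
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                for p_i, p_j in zip(self.encoders[names[i]].parameters(),
                                    self.encoders[names[j]].parameters()):
                    penalty = penalty + (p_i - p_j).pow(2).sum()
        return penalty
```

In such a setup the training objective would typically be the sum of the per-task cross-entropy losses plus a weighted `sharing_penalty()` term, where the weight controls how strongly the task-specific encoders are pulled toward each other.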