
Complementary Fusion of Multi-Features and Multi-Modalities in Sentiment Analysis

EasyChair Preprint no. 2151

18 pages · Date: December 12, 2019

Abstract

Sentiment analysis, mostly based on text, has developed rapidly in the last decade and has attracted widespread attention in both academia and industry. However, information in the real world comes from multiple modalities, such as audio, text, and images. To make real progress in understanding our world, artificial intelligence must be able to exploit such multimodal signals together efficiently. In this paper, we consider the task of multimodal sentiment analysis based on audio and text, and propose a novel fusion strategy, combining multi-feature fusion with multi-modality fusion, to improve the accuracy of audio-text sentiment analysis. We call it the DFF-ATMF (Deep Feature Fusion - Audio and Text Modality Fusion) model. The proposed DFF-ATMF model consists of two parallel branches, an audio-modality branch and a text-modality branch, and its core mechanisms are the fusion of multiple feature vectors and multi-modality attention. Experiments on the CMU-MOSI dataset and the recently released CMU-MOSEI dataset, both collected from YouTube for sentiment analysis, show that our DFF-ATMF model achieves very competitive results. Furthermore, through attention weight distribution heatmaps, we demonstrate that the deep features learned by DFF-ATMF are complementary to each other and robust. Surprisingly, DFF-ATMF also achieves new state-of-the-art results on the IEMOCAP dataset, indicating that the proposed fusion strategy generalizes well to multimodal emotion recognition.
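The two-branch idea described above can be illustrated with a toy sketch: each branch yields a deep feature vector, and modality-level attention weights (a softmax over per-modality scores) decide how much each modality contributes to the fused representation. All function names, scores, and dimensions below are illustrative assumptions, not the authors' actual DFF-ATMF architecture.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def modality_attention_fusion(audio_feat, text_feat, audio_score, text_score):
    """Fuse two same-length feature vectors with attention weights.

    The scores stand in for learned modality-relevance scalars;
    the softmax turns them into weights that sum to 1.
    """
    w_audio, w_text = softmax([audio_score, text_score])
    return [w_audio * a + w_text * t for a, t in zip(audio_feat, text_feat)]

# Toy deep features from each branch (illustrative values only).
audio_feat = [0.2, 0.8, 0.5]
text_feat = [0.9, 0.1, 0.4]

# Here the text modality gets a higher relevance score, so it dominates.
fused = modality_attention_fusion(audio_feat, text_feat, 1.0, 2.0)
print(fused)
```

In the actual model, the per-modality scores would be produced by learned attention layers and the fused vector fed to a sentiment classifier; this sketch only shows the weighting mechanism itself.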

Keyphrases: Multi-Feature Fusion, Multi-Modality Fusion, Multimodal Emotion Recognition, Multimodal Sentiment Analysis

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:2151,
  author = {Feiyang Chen and Ziqian Luo and Yanyan Xu and Dengfeng Ke},
  title = {Complementary Fusion of Multi-Features and Multi-Modalities in Sentiment Analysis},
  howpublished = {EasyChair Preprint no. 2151},
  year = {EasyChair, 2019}}