Paper Title

FAF: A novel multimodal emotion recognition approach integrating face, body and text

Authors

Fang, Zhongyu, He, Aoyun, Yu, Qihui, Gao, Baopeng, Ding, Weiping, Zhang, Tong, Ma, Lei

Abstract

Multimodal emotion analysis performs better at emotion recognition because it draws on more comprehensive emotional cues and multimodal emotion datasets. In this paper, we develop a large multimodal emotion dataset, named the "HED" dataset, to facilitate the emotion recognition task, and accordingly propose a multimodal emotion recognition method. To improve recognition accuracy, a "Feature After Feature" framework is used to extract crucial emotional information from aligned face, body, and text samples. We employ various benchmarks to evaluate the "HED" dataset and compare their performance with that of our method. The results show that the five-class accuracy of the proposed multimodal fusion method is about 83.75%, an improvement of 1.83%, 9.38%, and 21.62% over the individual modalities, respectively. The complementarity between the channels is effectively exploited to improve emotion recognition performance. We have also established a multimodal online emotion prediction platform, aiming to provide free emotion prediction to more users.
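
The abstract does not specify the internal structure of the "Feature After Feature" framework, but a cascaded feature-level fusion over the three modalities can be sketched as below. This is a minimal illustration, not the paper's actual architecture: the encoders are assumed to be external, and all layer sizes, the fusion order (face, then body, then text), and the class count of 5 are assumptions taken from or invented around the abstract (PyTorch):

```python
import torch
import torch.nn as nn


class FeatureAfterFeatureFusion(nn.Module):
    """Illustrative cascade fusion of face, body, and text features.

    Assumes each modality has already been encoded into a fixed-size
    vector (e.g., by a face CNN, a body-pose encoder, and a text
    encoder). Features are fused stage by stage: face + body first,
    then the result + text, before a 5-way emotion classifier.
    """

    def __init__(self, face_dim=512, body_dim=512, text_dim=768,
                 hidden_dim=256, num_classes=5):
        super().__init__()
        # Project each modality into a shared hidden space (sizes hypothetical).
        self.face_proj = nn.Linear(face_dim, hidden_dim)
        self.body_proj = nn.Linear(body_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # Stage 1: fuse face features with body features.
        self.fuse_fb = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU())
        # Stage 2: fuse the face+body representation with text features.
        self.fuse_fbt = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU())
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, face_feat, body_feat, text_feat):
        f = self.face_proj(face_feat)
        b = self.body_proj(body_feat)
        t = self.text_proj(text_feat)
        fb = self.fuse_fb(torch.cat([f, b], dim=-1))
        fbt = self.fuse_fbt(torch.cat([fb, t], dim=-1))
        return self.classifier(fbt)  # logits over the 5 emotion classes


if __name__ == "__main__":
    model = FeatureAfterFeatureFusion()
    face = torch.randn(4, 512)   # placeholder face features
    body = torch.randn(4, 512)   # placeholder body features
    text = torch.randn(4, 768)   # placeholder text embeddings
    print(model(face, body, text).shape)  # torch.Size([4, 5])
```

The staged concatenation here is one plausible reading of "Feature After Feature"; the reported per-modality gains (1.83%, 9.38%, 21.62%) suggest the modalities contribute complementary cues that such a fusion can exploit.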
