Paper Title

GraphMFT: A Graph Network based Multimodal Fusion Technique for Emotion Recognition in Conversation

Paper Authors

Jiang Li, Xiaoping Wang, Guoqing Lv, Zhigang Zeng

Paper Abstract

Multimodal machine learning is an emerging area of research that has received a great deal of scholarly attention in recent years. To date, there have been few studies on multimodal Emotion Recognition in Conversation (ERC). Since Graph Neural Networks (GNNs) possess a powerful capacity for relational modeling, they have an inherent advantage in the field of multimodal learning. GNNs leverage a graph constructed from multimodal data to perform intra- and inter-modal information interaction, which effectively facilitates the integration and complementation of multimodal data. In this work, we propose a novel Graph network based Multimodal Fusion Technique (GraphMFT) for emotion recognition in conversation. Multimodal data can be modeled as a graph, where each data object is regarded as a node, and both the intra- and inter-modal dependencies between data objects are regarded as edges. GraphMFT utilizes multiple improved graph attention networks to capture intra-modal contextual information and inter-modal complementary information. In addition, the proposed GraphMFT attempts to address the challenges of existing graph-based multimodal conversational emotion recognition models such as MMGCN. Empirical results on two public multimodal datasets reveal that our model outperforms State-Of-The-Art (SOTA) approaches with accuracies of 67.90% and 61.30%.
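To make the graph construction described in the abstract concrete, the following is a minimal, illustrative Python sketch (not the authors' code): each utterance contributes one node per modality, intra-modal edges connect utterances within a context window of the same modality, inter-modal edges connect the different-modality nodes of the same utterance, and a single-head graph-attention layer fuses information over this graph. The function and class names, feature dimensions, and window size are assumptions made for illustration only.

```python
# Illustrative sketch of a multimodal conversation graph with intra- and
# inter-modal edges, plus a minimal single-head graph-attention layer.
# All names, dimensions, and the context window are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F


def build_multimodal_edges(num_utts: int, num_mods: int = 3, window: int = 2):
    """Return an edge index over num_utts * num_mods nodes.

    Node index = modality * num_utts + utterance position.
    Intra-modal edges: utterances within +/- `window` in the same modality.
    Inter-modal edges: different-modality nodes of the same utterance.
    """
    edges = []
    for m in range(num_mods):
        base = m * num_utts
        for i in range(num_utts):
            for j in range(max(0, i - window), min(num_utts, i + window + 1)):
                edges.append((base + i, base + j))  # intra-modal (includes self-loop)
    for i in range(num_utts):
        for m1 in range(num_mods):
            for m2 in range(num_mods):
                if m1 != m2:
                    edges.append((m1 * num_utts + i, m2 * num_utts + i))  # inter-modal
    return torch.tensor(edges, dtype=torch.long).t()  # shape (2, E)


class SimpleGraphAttention(nn.Module):
    """One single-head GAT-style attention layer, kept deliberately minimal."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        h = self.W(x)                           # (N, out_dim)
        src, dst = edge_index                   # messages flow src -> dst
        scores = F.leaky_relu(
            self.a(torch.cat([h[dst], h[src]], dim=-1))
        ).squeeze(-1)                           # one score per edge
        alpha = torch.zeros_like(scores)
        for node in dst.unique():               # softmax over each node's neighborhood
            mask = dst == node
            alpha[mask] = F.softmax(scores[mask], dim=0)
        out = torch.zeros_like(h)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * h[src])  # weighted aggregation
        return F.elu(out)


if __name__ == "__main__":
    num_utts, feat_dim = 5, 16                       # toy dialogue of 5 utterances
    x = torch.randn(3 * num_utts, feat_dim)          # text / audio / visual nodes stacked
    edge_index = build_multimodal_edges(num_utts)
    fused = SimpleGraphAttention(feat_dim, 16)(x, edge_index)
    print(fused.shape)                               # (15, 16) fused node features
```

In this sketch, stacking several such attention layers would approximate the paper's idea of repeatedly exchanging intra-modal contextual and inter-modal complementary information; the actual GraphMFT architecture uses multiple improved graph attention networks and additional design choices not shown here.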
