Paper Title

SS-MFAR : Semi-supervised Multi-task Facial Affect Recognition

Paper Authors

Darshan Gera, Badveeti Naveen Siva Kumar, Bobbili Veerendra Raj Kumar, S. Balasubramanian

Paper Abstract

Automatic affect recognition has applications in many areas such as education, gaming, software development, automotives, medical care, etc., but achieving appreciable performance on in-the-wild data sets is a non-trivial task. Although in-the-wild data sets represent real-world scenarios better than synthetic data sets, they suffer from the problem of incomplete labels. Inspired by semi-supervised learning, in this paper we introduce our submission to the Multi-Task Learning Challenge at the 4th Affective Behavior Analysis in-the-wild (ABAW) 2022 Competition. The three tasks considered in this challenge are valence-arousal (VA) estimation, classification of expressions into the 6 basic categories (anger, disgust, fear, happiness, sadness, surprise), neutral, and the 'other' category, and detection of 12 action units (AU) numbered AU-{1,2,4,6,7,10,12,15,23,24,25,26}. Our method, Semi-supervised Multi-task Facial Affect Recognition (SS-MFAR), uses a deep residual network with task-specific classifiers for each task, along with adaptive thresholds for each expression class and semi-supervised learning for the incomplete labels. Source code is available at https://github.com/1980x/ABAW2022DMACS.
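
The abstract describes a shared deep residual backbone with one task-specific head per task (VA regression, 8-way expression classification, 12-way multi-label AU detection). Below is a minimal sketch of that head layout, assuming a PyTorch/torchvision ResNet-18 backbone; the class name `MultiTaskAffectNet`, the choice of ResNet-18, and the tanh output activation are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
from torchvision import models


class MultiTaskAffectNet(nn.Module):
    """Sketch of a residual backbone with three task-specific heads (assumed layout)."""

    def __init__(self, num_expressions=8, num_aus=12):
        super().__init__()
        backbone = models.resnet18(weights=None)      # shared feature extractor
        feat_dim = backbone.fc.in_features            # 512 for ResNet-18
        backbone.fc = nn.Identity()                   # drop the ImageNet classifier
        self.backbone = backbone
        self.va_head = nn.Linear(feat_dim, 2)               # valence & arousal
        self.expr_head = nn.Linear(feat_dim, num_expressions)  # 6 basic + neutral + 'other'
        self.au_head = nn.Linear(feat_dim, num_aus)          # 12 action units, multi-label

    def forward(self, x):
        f = self.backbone(x)
        va = torch.tanh(self.va_head(f))       # regression output in [-1, 1]
        expr_logits = self.expr_head(f)        # for softmax / cross-entropy
        au_logits = self.au_head(f)            # for per-unit sigmoid / BCE
        return va, expr_logits, au_logits


if __name__ == "__main__":
    model = MultiTaskAffectNet()
    va, expr, au = model(torch.randn(2, 3, 224, 224))
    print(va.shape, expr.shape, au.shape)  # (2, 2), (2, 8), (2, 12)
```

The adaptive per-class thresholds and the semi-supervised handling of incomplete labels mentioned in the abstract would sit in the training loop (e.g., deciding which pseudo-labeled expression samples contribute to the loss) and are not shown in this sketch.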
