Paper Title
Learning Disentangled Expression Representations from Facial Images
Paper Authors
Paper Abstract
Face images are subject to many different factors of variation, especially in unconstrained in-the-wild scenarios. For most tasks involving such images, e.g. expression recognition from video streams, obtaining enough labeled data is prohibitively expensive. One common strategy to tackle this problem is to learn disentangled representations for the different factors of variation of the observed data using adversarial learning. In this paper, we use a formulation of the adversarial loss to learn disentangled representations for face images. The model facilitates learning on single-task datasets and improves the state of the art in expression recognition, reaching an accuracy of 60.53% on the AffectNet dataset without using any additional data.
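To make the adversarial-disentanglement idea concrete, the following is a minimal sketch of a generic objective of this kind, not the paper's exact formulation: an encoder is trained to classify expression while an adversary tries to recover a nuisance factor (e.g. identity) from the same representation, and the encoder is penalized for leaking that factor. The function names and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def cross_entropy(probs, label):
    # Negative log-likelihood of the true class under a probability vector.
    return -np.log(probs[label])

def disentangling_loss(expr_probs, expr_label, adv_probs, adv_label, lam=0.1):
    """Illustrative combined objective (not the paper's exact loss):
    the encoder minimises the expression-classification loss while
    *maximising* the adversary's loss on a nuisance factor, which
    pushes nuisance information out of the expression representation.
    `lam` trades off the two terms."""
    task_loss = cross_entropy(expr_probs, expr_label)
    adversary_loss = cross_entropy(adv_probs, adv_label)
    # Subtracting the adversary term means gradient descent on this
    # quantity *increases* the adversary's error (gradient-reversal style).
    return task_loss - lam * adversary_loss

# Toy usage: confident expression prediction, adversary at chance level.
loss = disentangling_loss(
    expr_probs=np.array([0.7, 0.2, 0.1]), expr_label=0,
    adv_probs=np.array([0.5, 0.5]), adv_label=0,
)
```

In a full training setup the adversary would be a separate network updated to minimise its own loss, while the encoder receives the reversed gradient shown here.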