Paper Title

Impact of annotation modality on label quality and model performance in the automatic assessment of laughter in-the-wild

Authors

Jose Vargas-Quiros, Laura Cabrera-Quiros, Catharine Oertel, Hayley Hung

Abstract

Laughter is considered one of the most overt signals of joy. Laughter is well-recognized as a multimodal phenomenon but is most commonly detected by sensing the sound of laughter. It is unclear how perception and annotation of laughter differ when annotated from other modalities like video, via the body movements of laughter. In this paper we take a first step in this direction by asking if and how well laughter can be annotated when only audio, only video (containing full body movement information), or audiovisual modalities are available to annotators. We ask whether annotations of laughter are congruent across modalities, and compare the effect that labeling modality has on machine learning model performance. We compare annotations and models for laughter detection, intensity estimation, and segmentation, three tasks common in previous studies of laughter. Our analysis of more than 4000 annotations acquired from 48 annotators revealed evidence for incongruity in the perception of laughter and its intensity between modalities. Further analysis of annotations against consolidated audiovisual reference annotations revealed that recall was lower on average for video when compared to the audio condition, but tended to increase with the intensity of the laughter samples. Our machine learning experiments compared the performance of state-of-the-art unimodal (audio-based, video-based, and acceleration-based) and multimodal models for different combinations of input modalities, training label modality, and testing label modality. Models with video and acceleration inputs had similar performance regardless of training label modality, suggesting that it may be entirely appropriate to train models for laughter detection from body movements using video-acquired labels, despite their lower inter-rater agreement.
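
The abstract does not specify how annotation recall against the consolidated audiovisual reference was computed. Below is a minimal sketch, assuming segments are binarized into frames at an assumed 20 fps, of one plausible way the per-condition recall comparison could be set up. All function and variable names, the frame rate, and the toy segments are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): frame-level recall of per-modality
# laughter annotations against a consolidated audiovisual reference.
# Segment format, sampling rate, and all names are assumptions.
import numpy as np

FPS = 20  # assumed frame rate used to discretize annotations


def to_frames(segments, duration_s, fps=FPS):
    """Convert (start_s, end_s) laughter segments to a binary frame vector."""
    frames = np.zeros(int(round(duration_s * fps)), dtype=bool)
    for start, end in segments:
        frames[int(round(start * fps)):int(round(end * fps))] = True
    return frames


def frame_recall(annotation, reference, duration_s):
    """Fraction of reference laughter frames recovered by one annotation."""
    ann = to_frames(annotation, duration_s)
    ref = to_frames(reference, duration_s)
    return (ann & ref).sum() / max(ref.sum(), 1)


# Hypothetical usage: compare recall per annotation condition on a toy clip.
reference = [(2.0, 3.5), (10.0, 11.2)]            # consolidated audiovisual labels
annotations = {
    "audio": [(2.1, 3.4), (10.1, 11.0)],
    "video": [(2.3, 3.0)],                         # body-movement-only annotation
    "audiovisual": [(2.0, 3.5), (10.0, 11.1)],
}
for condition, segments in annotations.items():
    print(condition, round(frame_recall(segments, reference, duration_s=15.0), 3))
```

In this toy example the video-only annotation recovers fewer reference frames than the audio and audiovisual ones, mirroring the pattern the abstract reports; the actual analysis in the paper may use a different temporal resolution or matching criterion.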
