Paper Title

SpeakingFaces: A Large-Scale Multimodal Dataset of Voice Commands with Visual and Thermal Video Streams

Paper Authors

Madina Abdrakhmanova, Askat Kuzdeuov, Sheikh Jarju, Yerbolat Khassanov, Michael Lewis, Huseyin Atakan Varol

Paper Abstract

We present SpeakingFaces as a publicly-available large-scale multimodal dataset developed to support machine learning research in contexts that utilize a combination of thermal, visual, and audio data streams; examples include human-computer interaction, biometric authentication, recognition systems, domain transfer, and speech recognition. SpeakingFaces is comprised of aligned high-resolution thermal and visual spectra image streams of fully-framed faces synchronized with audio recordings of each subject speaking approximately 100 imperative phrases. Data were collected from 142 subjects, yielding over 13,000 instances of synchronized data (~3.8 TB). For technical validation, we demonstrate two baseline examples. The first baseline shows classification by gender, utilizing different combinations of the three data streams in both clean and noisy environments. The second example consists of thermal-to-visual facial image translation, as an instance of domain transfer.
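As a rough illustration of what one synchronized SpeakingFaces instance contains (aligned thermal and visual frame streams plus the audio of a spoken phrase, with a gender label as used in the first baseline), below is a minimal Python sketch. All field names, array shapes, and the dummy-instance helper are illustrative assumptions for pipeline prototyping, not the dataset's actual file layout or official loader.

    # Hypothetical container for one synchronized SpeakingFaces instance.
    # Shapes and field names are assumptions, not the released data format.
    from dataclasses import dataclass
    import numpy as np


    @dataclass
    class SpeakingFacesInstance:
        """One utterance: aligned thermal/visual frame streams plus audio."""
        subject_id: int
        phrase_id: int               # one of the ~100 imperative phrases
        thermal_frames: np.ndarray   # (T, H, W)    thermal image stream
        visual_frames: np.ndarray    # (T, H, W, 3) visual-spectrum image stream
        audio: np.ndarray            # (samples,)   mono waveform
        gender_label: int            # binary label used by the first baseline


    def dummy_instance(subject_id: int = 0, phrase_id: int = 0) -> SpeakingFacesInstance:
        """Build a zero-filled instance with plausible shapes, for testing a pipeline."""
        t, h, w = 30, 348, 464       # illustrative frame count and resolution
        return SpeakingFacesInstance(
            subject_id=subject_id,
            phrase_id=phrase_id,
            thermal_frames=np.zeros((t, h, w), dtype=np.float32),
            visual_frames=np.zeros((t, h, w, 3), dtype=np.uint8),
            audio=np.zeros(16_000, dtype=np.float32),
            gender_label=0,
        )


    if __name__ == "__main__":
        sample = dummy_instance()
        print(sample.thermal_frames.shape, sample.visual_frames.shape, sample.audio.shape)

A multimodal baseline (e.g., gender classification from different stream combinations) could consume such instances directly, swapping the dummy arrays for decoded frames and audio once the real data paths are wired in.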
