Paper Title
Open-Domain Sign Language Translation Learned from Online Video
Authors
Abstract
Existing work on sign language translation, that is, translation from sign language video into sentences in a written language, has focused mainly on (1) data collected in a controlled environment or (2) data in a specific domain, which limits applicability to real-world settings. In this paper, we introduce OpenASL, a large-scale American Sign Language (ASL) to English dataset collected from online video sites (e.g., YouTube). OpenASL contains 288 hours of ASL videos in multiple domains from over 200 signers and is, to date, the largest publicly available ASL translation dataset. To tackle the challenges of sign language translation in realistic settings, and in the absence of glosses, we propose a set of techniques including sign search as a pretext task for pre-training and fusion of mouthing and handshape features. The proposed techniques yield consistent and large improvements in translation quality over baseline models based on prior work. Our data and code are publicly available at https://github.com/chevalierNoir/OpenASL.
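The abstract mentions fusing mouthing and handshape features but does not specify the fusion mechanism; the details are in the full paper. As a rough illustration only, the sketch below shows one common way such frame-level feature streams can be combined (concatenation followed by a linear projection). The module name, dimensions, and fusion strategy here are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn


class FeatureFusion(nn.Module):
    """Hypothetical fusion of mouthing and handshape feature streams.

    Illustrative sketch only: assumes per-frame feature vectors from the
    two streams are concatenated along the channel dimension and then
    projected to a shared dimension for the downstream translation model.
    """

    def __init__(self, mouth_dim: int, hand_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(mouth_dim + hand_dim, out_dim)

    def forward(self, mouth_feats: torch.Tensor, hand_feats: torch.Tensor) -> torch.Tensor:
        # mouth_feats: (batch, time, mouth_dim)
        # hand_feats:  (batch, time, hand_dim)
        fused = torch.cat([mouth_feats, hand_feats], dim=-1)
        return self.proj(fused)


# Usage with dummy features (all shapes are assumptions):
fusion = FeatureFusion(mouth_dim=256, hand_dim=256, out_dim=512)
out = fusion(torch.randn(2, 100, 256), torch.randn(2, 100, 256))
print(out.shape)  # torch.Size([2, 100, 512])
```

Concatenation-plus-projection is just one plausible baseline; attention-based or gated fusion would be equally consistent with the abstract's wording.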