Paper Title
HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training
Paper Authors
Paper Abstract
We present HERO, a novel framework for large-scale video+language omni-representation learning. HERO encodes multimodal inputs in a hierarchical structure, where local context of a video frame is captured by a Cross-modal Transformer via multimodal fusion, and global video context is captured by a Temporal Transformer. In addition to standard Masked Language Modeling (MLM) and Masked Frame Modeling (MFM) objectives, we design two new pre-training tasks: (i) Video-Subtitle Matching (VSM), where the model predicts both global and local temporal alignment; and (ii) Frame Order Modeling (FOM), where the model predicts the right order of shuffled video frames. HERO is jointly trained on HowTo100M and large-scale TV datasets to gain deep understanding of complex social dynamics with multi-character interactions. Comprehensive experiments demonstrate that HERO achieves new state of the art on multiple benchmarks over Text-based Video/Video-moment Retrieval, Video Question Answering (QA), Video-and-language Inference and Video Captioning tasks across different domains. We also introduce two new challenging benchmarks How2QA and How2R for Video QA and Retrieval, collected from diverse video content over multimodalities.
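As a rough illustration of the Frame Order Modeling (FOM) objective described in the abstract, the sketch below shuffles a subset of frame positions in a clip and records, for each displaced frame, its original position as the prediction target. This is a hypothetical data-preparation helper (`make_fom_sample` is our name, not from the paper's released code), and it only shows the target construction, not the Temporal Transformer that would consume it:

```python
import random

def make_fom_sample(frames, shuffle_ratio=0.15, rng=None):
    """Sketch of FOM input construction: shuffle a random subset of
    frame positions; the model's task is to recover the original order.

    Returns (shuffled, target) where target[i] is the original position
    of the frame now at position i, or -1 if position i was untouched.
    """
    rng = rng or random.Random(0)  # fixed seed for a reproducible example
    n = len(frames)
    k = max(2, int(n * shuffle_ratio))  # how many positions to shuffle

    picked = rng.sample(range(n), k)   # positions whose frames get moved
    perm = picked[:]
    rng.shuffle(perm)                  # random reassignment among them

    shuffled = list(frames)
    target = [-1] * n
    for src, dst in zip(picked, perm):
        shuffled[dst] = frames[src]    # frame from src now sits at dst
        target[dst] = src              # supervision: predict src for dst
    return shuffled, target
```

A training pipeline would feed `shuffled` through the hierarchical encoder and apply a classification head over positions at the slots where `target[i] != -1`, analogous to how MLM predicts masked tokens.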