Paper Title

TransVCL: Attention-enhanced Video Copy Localization Network with Flexible Supervision

Authors

Sifeng He, Yue He, Minlong Lu, Chen Jiang, Xudong Yang, Feng Qian, Xiaobo Zhang, Lei Yang, Jiandong Zhang

Abstract

Video copy localization aims to precisely localize all the copied segments within a pair of untrimmed videos in video retrieval applications. Previous methods typically start from a frame-to-frame similarity matrix generated by cosine similarity between frame-level features of the input video pair, and then detect and refine the boundaries of copied segments on the similarity matrix under temporal constraints. In this paper, we propose TransVCL: an attention-enhanced video copy localization network, which is optimized directly from initial frame-level features and trained end-to-end with three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for similarity matrix generation, and a temporal alignment module for copied segment localization. In contrast to previous methods demanding a handcrafted similarity matrix, TransVCL incorporates long-range temporal information between the feature sequence pair using self- and cross-attention layers. With the joint design and optimization of the three components, the similarity matrix can be learned to present more discriminative copied patterns, leading to significant improvements over previous methods on segment-level labeled datasets (VCSL and VCDB). Besides the state-of-the-art performance in the fully supervised setting, the attention architecture enables TransVCL to further exploit unlabeled or simply video-level labeled data. Additional experiments supplementing with video-level labeled datasets, including SVD and FIVR, reveal the high flexibility of TransVCL, ranging from full supervision to semi-supervision (with or without video-level annotations). Code is publicly available at https://github.com/transvcl/TransVCL.
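
The abstract outlines a three-stage pipeline: attention-based feature enhancement, correlation-plus-softmax similarity generation, and temporal alignment over the resulting map. The sketch below illustrates the first two stages in PyTorch. All module names, dimensions, and hyperparameters here are illustrative assumptions rather than the authors' implementation; see the official repository (https://github.com/transvcl/TransVCL) for the actual code.

```python
# Minimal sketch of TransVCL's first two components, under assumed shapes
# and hyperparameters (256-d features, 4 attention heads). Not the authors'
# implementation.
import torch
import torch.nn as nn


class FeatureEnhancer(nn.Module):
    """Self- and cross-attention over the two frame-level feature sequences."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, a, b):
        # Self-attention injects long-range temporal context within each video.
        a = a + self.self_attn(a, a, a)[0]
        b = b + self.self_attn(b, b, b)[0]
        # Cross-attention lets each sequence attend to the other video's frames.
        a2 = a + self.cross_attn(a, b, b)[0]
        b2 = b + self.cross_attn(b, a, a)[0]
        return a2, b2


def similarity_matrix(a, b):
    """Correlation followed by a softmax, yielding a learned similarity map."""
    corr = torch.einsum("bnd,bmd->bnm", a, b) / a.shape[-1] ** 0.5
    # Row-wise softmax highlights, for each frame of video A, its best
    # matching frames in video B.
    return corr.softmax(dim=-1)


# Usage: a pair of untrimmed videos with 120 and 80 frames respectively.
a = torch.randn(1, 120, 256)
b = torch.randn(1, 80, 256)
sim = similarity_matrix(*FeatureEnhancer()(a, b))
print(sim.shape)  # (1, 120, 80)
```

Copied segments would then appear as high-similarity diagonal patterns in this map, which the paper's temporal alignment module localizes end-to-end instead of applying handcrafted temporal constraints.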
