Paper Title


FOCAL: A Forgery Localization Framework based on Video Coding Self-Consistency

Authors

Verde, Sebastiano, Bestagini, Paolo, Milani, Simone, Calvagno, Giancarlo, Tubaro, Stefano

Abstract


Forgery operations on video contents are nowadays within the reach of anyone, thanks to the availability of powerful and user-friendly editing software. Integrity verification and authentication of videos represent a major interest in both journalism (e.g., fake news debunking) and legal environments dealing with digital evidence (e.g., a court of law). While several strategies and different forensic traces have been proposed in recent years, the latest solutions aim to increase accuracy by combining multiple detectors and features. This paper presents a video forgery localization framework that verifies the self-consistency of coding traces between and within video frames, by fusing the information derived from a set of independent feature descriptors. The feature extraction step is carried out by means of an explainable convolutional neural network architecture, specifically designed to look for and classify coding artifacts. The overall framework was validated in two typical forgery scenarios: temporal and spatial splicing. Experimental results show an improvement over the state of the art on temporal splicing localization, as well as promising performance in the newly tackled case of spatial splicing, on both synthetic and real-world videos.
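The abstract describes fusing the outputs of several independent feature descriptors into a single localization decision, where regions whose coding traces are inconsistent with the rest of the frame are flagged as forged. The snippet below is a minimal illustrative sketch of that fusion idea, not the paper's actual pipeline: the descriptor names, the per-block consistency maps, and the simple average-and-threshold fusion rule are all assumptions for illustration.

```python
import numpy as np

def fuse_consistency_maps(maps, threshold=0.5):
    """Fuse per-descriptor consistency maps into a binary forgery mask.

    Each map is an H x W array of scores in [0, 1], where LOW consistency
    suggests tampering. Fusion here is a plain average followed by a
    threshold; the paper's real fusion rule may differ (hypothetical sketch).
    """
    fused = np.mean(np.stack(maps, axis=0), axis=0)
    return fused < threshold  # True where coding traces look inconsistent

# Toy example: two (made-up) descriptors agree that the bottom-right
# block has anomalous coding traces, so only that block is flagged.
map_a = np.array([[0.90, 0.80], [0.85, 0.20]])
map_b = np.array([[0.95, 0.90], [0.80, 0.10]])
mask = fuse_consistency_maps([map_a, map_b])
```

Averaging before thresholding means a single noisy descriptor cannot flag a block on its own, which is one motivation for combining multiple independent traces rather than relying on a single detector.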
