Paper Title


Affine Transformation-Based Deep Frame Prediction

Paper Authors

Hyomin Choi, Ivan V. Bajić

Paper Abstract


We propose a neural network model to estimate the current frame from two reference frames, using affine transformation and adaptive spatially-varying filters. The estimated affine transformation allows for using shorter filters compared to existing approaches for deep frame prediction. The predicted frame is used as a reference for coding the current frame. Since the proposed model is available at both encoder and decoder, there is no need to code or transmit motion information for the predicted frame. By making use of dilated convolutions and reduced filter length, our model is significantly smaller, yet more accurate, than any of the neural networks in prior works on this topic. Two versions of the proposed model - one for uni-directional, and one for bi-directional prediction - are trained using a combination of Discrete Cosine Transform (DCT)-based l1-loss with various transform sizes, multi-scale Mean Squared Error (MSE) loss, and an object context reconstruction loss. The trained models are integrated with the HEVC video coding pipeline. The experiments show that the proposed models achieve about 7.3%, 5.4%, and 4.2% bit savings for the luminance component on average in the Low delay P, Low delay, and Random access configurations, respectively.
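To make the pipeline described above more concrete, below is a minimal NumPy/SciPy sketch of the two ideas in the abstract: warping the reference frames with estimated affine parameters before blending them, and training with a block-DCT-based L1 loss combined with a multi-scale MSE loss. This is not the paper's implementation: the per-pixel weighted blend stands in for the adaptive spatially-varying filters, the affine parameters are assumed to be given, and the transform sizes, loss weights, and number of scales are illustrative choices.

```python
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import affine_transform


def warp_and_blend(ref0, ref1, A0, t0, A1, t1, weight):
    """Warp each reference frame with its estimated affine parameters, then
    blend the warped frames with a per-pixel weight map. The weighted blend is
    a simplified stand-in for the adaptive spatially-varying filters."""
    warped0 = affine_transform(ref0, A0, offset=t0, order=1, mode='nearest')
    warped1 = affine_transform(ref1, A1, offset=t1, order=1, mode='nearest')
    return weight * warped0 + (1.0 - weight) * warped1


def block_dct_l1(pred, target, block=8):
    """Mean absolute difference between block-wise 2D DCT coefficients."""
    h = pred.shape[0] - pred.shape[0] % block
    w = pred.shape[1] - pred.shape[1] % block
    total, count = 0.0, 0
    for y in range(0, h, block):
        for x in range(0, w, block):
            diff = (dctn(pred[y:y + block, x:x + block], norm='ortho')
                    - dctn(target[y:y + block, x:x + block], norm='ortho'))
            total += np.abs(diff).sum()
            count += diff.size
    return total / count


def avg_pool2(x):
    """2x2 average pooling (assumes even spatial dimensions)."""
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))


def multiscale_mse(pred, target, scales=3):
    """MSE averaged over successively 2x-downsampled copies of the frames
    (frame dimensions are assumed divisible by 2**(scales-1))."""
    loss = 0.0
    for _ in range(scales):
        loss += np.mean((pred - target) ** 2)
        pred, target = avg_pool2(pred), avg_pool2(target)
    return loss / scales


def combined_loss(pred, target, dct_sizes=(4, 8), w_dct=1.0, w_mse=1.0):
    """DCT-based L1 terms over several transform sizes plus multi-scale MSE.
    The transform sizes and weights are illustrative, not the paper's."""
    dct_term = sum(block_dct_l1(pred, target, b) for b in dct_sizes)
    return w_dct * dct_term + w_mse * multiscale_mse(pred, target)
```

For example, with two 64x64 grayscale references `ref0`, `ref1` and identity affine parameters (`np.eye(2)` and zero offsets), `combined_loss(warp_and_blend(ref0, ref1, np.eye(2), (0, 0), np.eye(2), (0, 0), 0.5), current)` evaluates the training objective for a simple averaged prediction; in the actual model the affine parameters, blending, and filters are all produced by the network.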
