Paper Title
RFNet-4D++: Joint Object Reconstruction and Flow Estimation from 4D Point Clouds with Cross-Attention Spatio-Temporal Features
Paper Authors
Paper Abstract
Object reconstruction from 3D point clouds has been a long-standing research problem in computer vision and computer graphics, and has achieved impressive progress. However, reconstruction from time-varying point clouds (a.k.a. 4D point clouds) is generally overlooked. In this paper, we propose a new network architecture, namely RFNet-4D++, that jointly reconstructs objects and their motion flows from 4D point clouds. The key insight is that performing both tasks simultaneously, by learning spatial and temporal features from a sequence of point clouds, allows each task to benefit the other, leading to improved overall performance. To demonstrate this ability, we design a temporal vector field learning module that uses an unsupervised learning approach for the flow estimation task, complemented by supervised learning of spatial structures for the object reconstruction task. Extensive experiments and analyses on benchmark datasets validate the effectiveness and efficiency of our method. As shown in the experimental results, our method achieves state-of-the-art performance on both flow estimation and object reconstruction while running much faster than existing methods in both training and inference. Our code and data are available at https://github.com/hkust-vgd/RFNet-4D
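To make the joint-prediction idea concrete, the following is a minimal NumPy sketch of the overall structure the abstract describes: a shared spatio-temporal encoder over a point-cloud sequence feeding two heads, one predicting a per-point motion vector field (flow) and one predicting occupancy at query locations (reconstruction). All dimensions, pooling choices, and function names here are illustrative assumptions, not the paper's actual RFNet-4D++ architecture or its cross-attention encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
T, N, D = 4, 256, 3   # frames, points per frame, xyz coordinates
F = 32                # per-frame feature width

def encode_sequence(seq):
    """Toy spatio-temporal encoder: per-point features, max-pooled over
    space per frame, then concatenated over time. A stand-in for the
    paper's learned cross-attention spatio-temporal features."""
    W = rng.standard_normal((D, F)) * 0.1
    per_frame = np.tanh(seq @ W)       # (T, N, F) per-point features
    spatial = per_frame.max(axis=1)    # (T, F) spatial max-pool
    return spatial.reshape(-1)         # (T*F,) temporal concatenation

def flow_head(feat, points):
    """Predict a motion vector for every input point (temporal vector
    field), conditioned on the shared sequence feature."""
    W = rng.standard_normal((feat.size + D, D)) * 0.1
    cond = np.broadcast_to(feat, (points.shape[0], feat.size))
    return np.concatenate([cond, points], axis=1) @ W   # (N, 3) flow

def occupancy_head(feat, queries):
    """Predict occupancy probability at arbitrary 3D query locations,
    conditioned on the same shared feature (reconstruction branch)."""
    W = rng.standard_normal((feat.size + D, 1)) * 0.1
    cond = np.broadcast_to(feat, (queries.shape[0], feat.size))
    logits = np.concatenate([cond, queries], axis=1) @ W
    return 1.0 / (1.0 + np.exp(-logits))                # (M, 1) in (0, 1)

seq = rng.standard_normal((T, N, D))          # a 4D point-cloud sequence
feat = encode_sequence(seq)                   # shared feature for both heads
flow = flow_head(feat, seq[0])                # per-point motion vectors
occ = occupancy_head(feat, rng.standard_normal((128, D)))
```

In this sketch, the key property the abstract claims is visible in the structure: both heads consume the same encoded sequence feature, so training signals from the supervised reconstruction branch and the unsupervised flow branch shape a shared representation.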