Paper Title
Point Cloud Upsampling via Cascaded Refinement Network
Paper Authors
Paper Abstract
Point cloud upsampling focuses on generating a dense, uniform point set that lies close to the underlying surface. Most previous approaches pursue these objectives by carefully designing a single-stage network, which makes it challenging to generate a high-fidelity point distribution. Instead, upsampling a point cloud in a coarse-to-fine manner is a promising solution. However, existing coarse-to-fine upsampling methods require extra training strategies, which are complicated and time-consuming during training. In this paper, we propose a simple yet effective cascaded refinement network consisting of three generation stages that share the same network architecture but achieve different objectives. Specifically, the first two upsampling stages progressively generate dense but coarse points, while the last refinement stage further adjusts the coarse points to better positions. To mitigate the learning conflicts between multiple stages and reduce the difficulty of regressing new points, we encourage each stage to predict point offsets with respect to the input shape. In this manner, the proposed cascaded refinement network can be easily optimized without extra learning strategies. Moreover, we design a transformer-based feature extraction module to learn informative global and local shape context. In the inference phase, we can dynamically adjust the trade-off between model efficiency and effectiveness, depending on the available computational resources. Extensive experiments on both synthetic and real-scanned datasets demonstrate that the proposed approach outperforms existing state-of-the-art methods.
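To make the coarse-to-fine, offset-based idea in the abstract concrete, below is a minimal PyTorch sketch of a cascaded pipeline: two upsampling stages that duplicate points and regress offsets relative to the input shape, followed by a refinement stage that only adjusts positions. This is not the authors' released implementation; the class names, layer sizes, and the simple per-point MLP standing in for the paper's transformer-based feature extractor are all hypothetical placeholders.

```python
# Minimal sketch of cascaded coarse-to-fine upsampling with offset prediction.
# Assumptions: per-point MLP features instead of the transformer module,
# 2x ratio per upsampling stage, and toy layer sizes.
import torch
import torch.nn as nn


class UpsampleStage(nn.Module):
    """One generation stage: duplicate points by `ratio` and predict 3D offsets."""

    def __init__(self, ratio: int = 2, feat_dim: int = 64):
        super().__init__()
        self.ratio = ratio
        # Hypothetical per-point feature extractor (stand-in for the
        # transformer-based module, whose details are not in the abstract).
        self.feature = nn.Sequential(
            nn.Conv1d(3, feat_dim, 1), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, 1), nn.ReLU(),
        )
        # Offset head: regresses a displacement for each duplicated point.
        self.offset = nn.Conv1d(feat_dim, 3, 1)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, 3, N) input coordinates
        feat = self.feature(xyz)                          # (B, C, N)
        feat = feat.repeat_interleave(self.ratio, dim=2)  # (B, C, N*ratio)
        base = xyz.repeat_interleave(self.ratio, dim=2)   # (B, 3, N*ratio)
        # Predict offsets w.r.t. the duplicated input shape rather than
        # regressing absolute coordinates from scratch.
        return base + self.offset(feat)


class CascadedRefinement(nn.Module):
    """Two upsampling stages (4x overall here) plus a final refinement stage."""

    def __init__(self):
        super().__init__()
        self.up1 = UpsampleStage(ratio=2)
        self.up2 = UpsampleStage(ratio=2)
        self.refine = UpsampleStage(ratio=1)  # ratio=1: only adjust positions

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        coarse = self.up2(self.up1(xyz))      # dense but coarse points
        return self.refine(coarse)            # refined final point set


if __name__ == "__main__":
    sparse = torch.rand(1, 3, 256)            # toy sparse input patch
    dense = CascadedRefinement()(sparse)
    print(dense.shape)                        # torch.Size([1, 3, 1024])
```

Because every stage shares the same architecture and only regresses residual offsets, the stages can be trained jointly with a single reconstruction loss; dropping or keeping the last stage at inference time is one way to trade effectiveness for efficiency, as the abstract suggests.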