Paper Title

Triangle-Net: Towards Robustness in Point Cloud Learning

Paper Authors

Chenxi Xiao, Juan Wachs

Paper Abstract

Three dimensional (3D) object recognition is becoming a key desired capability for many computer vision systems, such as autonomous vehicles, service robots and surveillance drones, to operate more effectively in unstructured environments. These real-time systems require effective classification methods that are robust to various sampling resolutions, noisy measurements, and unconstrained pose configurations. Previous research has shown that points' sparsity, rotation and positional inherent variance can lead to a significant drop in the performance of point cloud based classification techniques. However, none of these techniques is sufficiently robust to multifactorial variance and significant sparsity. In this regard, we propose a novel approach for 3D classification that can simultaneously achieve invariance to rotation, positional shift and scaling, and is robust to point sparsity. To this end, we introduce a new feature that utilizes the graph structure of point clouds, which can be learned end-to-end with our proposed neural network to acquire a robust latent representation of the 3D object. We show that such latent representations can significantly improve the performance of object classification and retrieval tasks when points are sparse. Further, we show that our approach outperforms PointNet and 3DmFV by 35.0% and 28.1% respectively on the ModelNet40 classification task using sparse point clouds of only 16 points under arbitrary SO(3) rotation.
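
The abstract's central claim is that features built from the geometric (graph) structure of a point cloud can be simultaneously invariant to rotation, positional shift and scaling. As a rough illustration of why such invariances are attainable from pairwise geometry alone, the sketch below computes sorted, perimeter-normalized side lengths of randomly sampled point triplets. This is not the authors' actual feature or network; `triplet_features`, the random sampling scheme and the perimeter normalization are all illustrative assumptions.

```python
import numpy as np

def triplet_features(points: np.ndarray, n_triplets: int = 256, seed: int = 0) -> np.ndarray:
    """Illustrative rotation-, translation- and scale-invariant features
    from random point triplets (a sketch, not the paper's exact feature).

    points: (N, 3) array of 3D coordinates.
    Returns an (n_triplets, 3) array of sorted, normalized side lengths.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    # Three distinct points per triplet; triplets sampled independently.
    idx = np.array([rng.choice(n, size=3, replace=False) for _ in range(n_triplets)])
    a, b, c = points[idx[:, 0]], points[idx[:, 1]], points[idx[:, 2]]
    # Pairwise side lengths are unchanged by any rigid motion (rotation + shift).
    sides = np.stack([
        np.linalg.norm(a - b, axis=1),
        np.linalg.norm(b - c, axis=1),
        np.linalg.norm(c - a, axis=1),
    ], axis=1)
    # Sorting removes dependence on the order points were drawn in;
    # dividing by the perimeter removes the overall scale.
    sides.sort(axis=1)
    perimeter = sides.sum(axis=1, keepdims=True)
    return sides / np.maximum(perimeter, 1e-12)

if __name__ == "__main__":
    pts = np.random.default_rng(1).normal(size=(64, 3))
    # A random rotation via QR decomposition, plus a shift and a scale.
    q, _ = np.linalg.qr(np.random.default_rng(2).normal(size=(3, 3)))
    transformed = 2.5 * pts @ q.T + np.array([1.0, -3.0, 0.5])
    f1 = triplet_features(pts)
    f2 = triplet_features(transformed)
    print(np.allclose(f1, f2))  # True: same triplets yield identical invariant features
```

The usage check at the bottom applies an arbitrary rotation, shift and scale and confirms the features are unchanged; a learned feature in the spirit of the paper would feed such invariant quantities into a network rather than raw coordinates, so robustness does not have to be learned from data augmentation alone.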
