Paper Title
Self-Supervised Joint Learning Framework of Depth Estimation via Implicit Cues
Paper Authors
Paper Abstract
In self-supervised monocular depth estimation, depth discontinuities and artifacts on moving objects remain challenging problems. Existing self-supervised methods usually utilize a single view to train the depth estimation network. Compared with static views, the abundant dynamic properties between video frames are beneficial to refined depth estimation, especially for dynamic objects. In this work, we propose a novel self-supervised joint learning framework for depth estimation using consecutive frames from monocular and stereo videos. The main idea is to use an implicit depth cue extractor that leverages dynamic and static cues to generate useful depth proposals. These cues can predict distinguishable motion contours and geometric scene structures. Furthermore, a new high-dimensional attention module is introduced to extract a clear global transformation, which effectively suppresses the uncertainty of local descriptors in high-dimensional space and yields a more reliable optimization in the learning framework. Experiments demonstrate that the proposed framework outperforms the state-of-the-art (SOTA) on the KITTI and Make3D datasets.
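The abstract gives no implementation details for the high-dimensional attention module. As a rough illustration only, the sketch below shows one common way a channel-gating attention block (squeeze-and-excitation style) can reweight high-dimensional local descriptors with a global summary, which is in the spirit of suppressing unreliable local responses; it is not the authors' code, and the class name, the gating design, and the PyTorch framework choice are all assumptions.

```python
# Hypothetical sketch, NOT the paper's released implementation.
# Illustrates a channel-gating attention block over high-dimensional
# feature maps: a global context vector gates each local descriptor,
# attenuating channels whose responses are unreliable.
import torch
import torch.nn as nn


class HighDimAttention(nn.Module):
    """Reweights C-dimensional local descriptors with a global gate
    (squeeze-and-excitation style; names and design are assumptions)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global spatial summary per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel gate in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gate = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gate  # suppress channels the global context deems unreliable


# Usage on a dummy feature map (batch 2, 256 channels, 48x160 resolution):
feats = torch.randn(2, 256, 48, 160)
out = HighDimAttention(256)(feats)  # same shape, globally reweighted
```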