Paper Title
Self-Supervised Deep Visual Odometry with Online Adaptation
Authors
Abstract
Self-supervised VO methods have shown great success in jointly estimating camera pose and depth from videos. However, like most data-driven methods, existing VO networks suffer from a notable drop in performance when confronted with scenes different from the training data, which makes them unsuitable for practical applications. In this paper, we propose an online meta-learning algorithm that enables VO networks to continuously adapt to new environments in a self-supervised manner. The proposed method uses a convolutional long short-term memory (ConvLSTM) to aggregate rich spatio-temporal information from the past. The network is thus able to memorize and learn from its past experience for better estimation of, and faster adaptation to, the current frame. To cope with changing environments when running VO in the open world, we further propose an online feature alignment method that aligns feature distributions across different time steps. Our VO network is therefore able to seamlessly adapt to different environments. Extensive experiments on unseen outdoor scenes, virtual-to-real-world, and outdoor-to-indoor settings demonstrate that our method consistently and considerably outperforms state-of-the-art self-supervised VO baselines.
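To make the online-adaptation idea concrete, below is a minimal, hypothetical PyTorch sketch of the adaptation loop described in the abstract: for every incoming frame, the network takes one or more self-supervised gradient steps, while an online feature alignment step matches the current feature statistics to running statistics accumulated from past frames. The network, the placeholder loss, and the statistics-matching scheme are illustrative assumptions, not the authors' actual implementation; the ConvLSTM aggregation and the meta-learning objective are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyDepthNet(nn.Module):
    """Toy depth branch standing in for the real VO network (assumption)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(3, 16, 3, padding=1)
        self.decoder = nn.Conv2d(16, 1, 3, padding=1)

    def encode(self, img):
        return F.relu(self.encoder(img))

    def decode(self, feat):
        return F.softplus(self.decoder(feat)) + 1e-3  # strictly positive depth


def align_features(feat, running_mean, running_std, momentum=0.1):
    """Online feature alignment (sketch): normalize current features and
    re-scale them with running statistics from past frames, so the feature
    distribution stays consistent over time."""
    cur_mean = feat.mean(dim=(0, 2, 3), keepdim=True)
    cur_std = feat.std(dim=(0, 2, 3), keepdim=True) + 1e-6
    if running_mean is None:  # first frame: initialize running statistics
        running_mean, running_std = cur_mean.detach(), cur_std.detach()
    aligned = (feat - cur_mean) / cur_std * running_std + running_mean
    # update running statistics with the current frame's statistics
    running_mean = (1 - momentum) * running_mean + momentum * cur_mean.detach()
    running_std = (1 - momentum) * running_std + momentum * cur_std.detach()
    return aligned, running_mean, running_std


def online_adaptation(video_frames, num_inner_steps=1, lr=1e-4):
    """One pass over a video stream, adapting the network on every frame."""
    net = TinyDepthNet()
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    running_mean, running_std = None, None

    for prev_img, cur_img in zip(video_frames[:-1], video_frames[1:]):
        for _ in range(num_inner_steps):  # self-supervised adaptation steps
            feat = net.encode(cur_img)
            feat, running_mean, running_std = align_features(
                feat, running_mean, running_std)
            depth = net.decode(feat)
            # placeholder self-supervised loss; the actual method uses a
            # photometric reprojection loss between prev_img and cur_img
            loss = F.l1_loss(depth, prev_img.mean(dim=1, keepdim=True))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return net


if __name__ == "__main__":
    # three random "frames" just to show the loop runs end to end
    frames = [torch.rand(1, 3, 64, 64) for _ in range(3)]
    online_adaptation(frames)
```

The key design point this sketch illustrates is that adaptation happens continuously at test time with only the self-supervised signal, and that aligning feature statistics across time steps keeps the adapted features from drifting as the environment changes.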