Paper Title

TemporalStereo: Efficient Spatial-Temporal Stereo Matching Network

Paper Authors

Youmin Zhang, Matteo Poggi, Stefano Mattoccia

Paper Abstract

We present TemporalStereo, a coarse-to-fine stereo matching network that is highly efficient and able to effectively exploit past geometry and context information to boost matching accuracy. Our network leverages a sparse cost volume and proves effective when a single stereo pair is given. Moreover, its peculiar ability to use spatio-temporal information across stereo sequences allows TemporalStereo to alleviate problems such as occlusions and reflective regions while retaining high efficiency in this latter case as well. Notably, our model -- trained once with stereo videos -- can run in both single-pair and temporal modes seamlessly. Experiments show that our network, relying on camera motion, is robust even to dynamic objects when running on videos. We validate TemporalStereo through extensive experiments on synthetic (SceneFlow, TartanAir) and real (KITTI 2012, KITTI 2015) datasets. Our model achieves state-of-the-art performance on all of these datasets. Code is available at \url{https://github.com/youmi-zym/TemporalStereo.git}.
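To make the two ingredients named in the abstract more concrete, the sketch below shows one possible way to build a sparse cost volume and perform one coarse-to-fine refinement step in PyTorch. It is a minimal, hypothetical illustration and not the authors' implementation (see the repository linked above for that): the function names `warp_right`, `sparse_cost_volume`, and `refine`, the correlation-based matching cost, and the fixed offset set are assumptions, and the feature extractor, temporal propagation, and losses are omitted.

```python
# Minimal sketch (assumed, not the authors' code): evaluate only a sparse set of
# disparity candidates around a coarse estimate instead of a full dense volume.
import torch
import torch.nn.functional as F

def warp_right(right_feat, disp):
    """Warp right-view features to the left view using a disparity map (B, 1, H, W)."""
    b, _, h, w = right_feat.shape
    xs = torch.arange(w, device=disp.device).view(1, 1, w).expand(b, h, w).float()
    ys = torch.arange(h, device=disp.device).view(1, h, 1).expand(b, h, w).float()
    grid_x = (xs - disp.squeeze(1)) / (w - 1) * 2 - 1   # shift columns by disparity
    grid_y = ys / (h - 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)         # (B, H, W, 2) in [-1, 1]
    return F.grid_sample(right_feat, grid, align_corners=True)

def sparse_cost_volume(left_feat, right_feat, candidates):
    """Correlation cost for per-pixel disparity candidates of shape (B, K, H, W)."""
    costs = []
    for k in range(candidates.shape[1]):
        warped = warp_right(right_feat, candidates[:, k:k + 1])
        costs.append((left_feat * warped).mean(dim=1, keepdim=True))
    return torch.cat(costs, dim=1)                        # (B, K, H, W)

def refine(left_feat, right_feat, coarse_disp, offsets=(-2.0, -1.0, 0.0, 1.0, 2.0)):
    """One coarse-to-fine step: upsample the coarse disparity, score a few
    candidates around it, and take a soft-argmax over the sparse volume."""
    up = F.interpolate(coarse_disp, scale_factor=2, mode="bilinear",
                       align_corners=True) * 2            # disparity scales with width
    cand = torch.cat([up + o for o in offsets], dim=1)    # (B, K, H, W)
    cost = sparse_cost_volume(left_feat, right_feat, cand)
    weight = F.softmax(cost, dim=1)                        # per-pixel candidate weights
    return (weight * cand).sum(dim=1, keepdim=True)        # refined disparity map
```

Under this reading, each pyramid level only scores a handful of candidates (here five) around the upsampled estimate, which is what keeps the cost volume sparse and the network fast compared with exhaustively searching the full disparity range.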
