Paper Title
Fast Road Segmentation via Uncertainty-aware Symmetric Network
Paper Authors
Paper Abstract
The high performance of RGB-D based road segmentation methods contrasts with their rare application in commercial autonomous driving, for two reasons: 1) prior methods cannot achieve both high inference speed and high accuracy; 2) the distinct properties of RGB and depth data are not fully exploited, which limits the reliability of the predicted road regions. In this paper, an uncertainty-aware symmetric network (USNet) based on evidence theory is proposed to achieve a trade-off between speed and accuracy by fully fusing RGB and depth data. First, the cross-modal feature fusion operations that are indispensable in prior RGB-D based methods are abandoned. We instead adopt two lightweight subnetworks to learn road representations from the RGB and depth inputs separately; this lightweight structure guarantees real-time inference. Moreover, a multi-scale evidence collection (MEC) module is designed to gather evidence at multiple scales for each modality, providing sufficient evidence for per-pixel class determination. Finally, in the uncertainty-aware fusion (UAF) module, the uncertainty of each modality is perceived and used to guide the fusion of the two subnetworks. Experimental results demonstrate that our method achieves state-of-the-art accuracy at a real-time inference speed of 43+ FPS. The source code is available at https://github.com/morancyc/USNet.
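The abstract only names the MEC module without detailing it; below is a minimal sketch of one plausible realization, assuming a 1x1 convolutional evidence head per feature scale whose outputs are upsampled and summed. The class name `MultiScaleEvidence` and its structure are hypothetical illustrations, not the authors' design; the actual implementation is in the linked repository.

```python
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleEvidence(nn.Module):
    """Collect non-negative class evidence from features at several scales.

    Each scale gets its own 1x1 conv head; the per-scale evidence maps are
    upsampled to the output resolution and summed, so every scale
    contributes evidence to the final per-pixel prediction.
    """
    def __init__(self, channels, num_classes=2):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Conv2d(c, num_classes, kernel_size=1) for c in channels
        )

    def forward(self, feats, out_size):
        evidence = 0.0
        for f, head in zip(feats, self.heads):
            e = F.softplus(head(f))  # softplus keeps evidence non-negative
            evidence = evidence + F.interpolate(
                e, size=out_size, mode="bilinear", align_corners=False
            )
        return evidence  # (B, num_classes, H, W)
```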
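For the fusion step, "evidence theory" in this setting typically means subjective-logic opinions derived from Dirichlet evidence (as in evidential deep learning), combined with a reduced Dempster rule so that a modality with high uncertainty contributes less to the fused belief. The sketch below illustrates that general recipe under those assumptions; it is not necessarily the paper's exact UAF rule, and the helper names `opinion` and `fuse` are hypothetical.

```python
import torch

def opinion(evidence, num_classes=2):
    """Map evidence e to a subjective opinion: alpha = e + 1,
    belief b_k = e_k / S, uncertainty u = K / S, with S = sum_k alpha_k."""
    alpha = evidence + 1.0                # Dirichlet parameters
    S = alpha.sum(dim=1, keepdim=True)    # Dirichlet strength
    return evidence / S, num_classes / S  # (belief, uncertainty)

def fuse(b1, u1, b2, u2):
    """Reduced Dempster combination of two per-pixel opinions.

    The conflict C is the belief the two sources assign to different
    classes; it discounts disagreements, while each modality's
    uncertainty u down-weights its own contribution.
    """
    C = (b1.sum(1, keepdim=True) * b2.sum(1, keepdim=True)
         - (b1 * b2).sum(1, keepdim=True))
    b = (b1 * b2 + b1 * u2 + b2 * u1) / (1.0 - C)
    u = (u1 * u2) / (1.0 - C)
    return b, u

# usage: fuse per-pixel opinions from the RGB and depth branches
e_rgb = torch.rand(1, 2, 64, 64)    # stand-in for RGB-branch evidence
e_depth = torch.rand(1, 2, 64, 64)  # stand-in for depth-branch evidence
b, u = fuse(*opinion(e_rgb), *opinion(e_depth))
road = b.argmax(dim=1)              # fused per-pixel class prediction
```

Because the two branches never exchange features, this combination happens only at the output level, which is consistent with the abstract's claim that cross-modal feature fusion is abandoned in favor of two independent lightweight subnetworks.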