Paper Title

Driving Through Ghosts: Behavioral Cloning with False Positives

Paper Authors

Andreas Bühler, Adrien Gaidon, Andrei Cramariuc, Rares Ambrus, Guy Rosman, Wolfram Burgard

Paper Abstract

Safe autonomous driving requires robust detection of other traffic participants. However, robust does not mean perfect, and safe systems typically minimize missed detections at the expense of a higher false positive rate. This results in conservative and yet potentially dangerous behavior such as avoiding imaginary obstacles. In the context of behavioral cloning, perceptual errors at training time can lead to learning difficulties or wrong policies, as expert demonstrations might be inconsistent with the perceived world state. In this work, we propose a behavioral cloning approach that can safely leverage imperfect perception without being conservative. Our core contribution is a novel representation of perceptual uncertainty for learning to plan. We propose a new probabilistic birds-eye-view semantic grid to encode the noisy output of object perception systems. We then leverage expert demonstrations to learn an imitative driving policy using this probabilistic representation. Using the CARLA simulator, we show that our approach can safely overcome critical false positives that would otherwise lead to catastrophic failures or conservative behavior.
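The core idea in the abstract is a probabilistic bird's-eye-view semantic grid in which each cell carries the detector's confidence rather than a hard occupancy bit, so a behavioral-cloning policy can learn when a low-confidence "ghost" obstacle is safe to drive through. Below is a minimal, hypothetical sketch of such a grid rasterizer. The detection fields (x, y, length, width, score), grid dimensions, and axis-aligned footprints are illustrative assumptions, not the paper's actual encoding, which includes richer per-cell semantics and object orientation.

```python
import numpy as np

def rasterize_probabilistic_bev(detections, grid_size=128, resolution=0.25):
    """Rasterize noisy object detections into a probabilistic BEV grid.

    Each detection is a dict with an ego-frame (x, y) center in meters,
    a (length, width) footprint, and a detector confidence score in [0, 1].
    Instead of thresholding detections into a binary occupancy map, each
    covered cell stores the confidence, so a downstream policy can learn
    how much to trust it. (Hypothetical sketch; footprints are simplified
    to axis-aligned boxes.)
    """
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    half = grid_size // 2  # ego vehicle at the grid center
    for det in detections:
        # Convert metric ego-frame coordinates to grid indices.
        cx = int(det["x"] / resolution) + half
        cy = int(det["y"] / resolution) + half
        hl = max(1, int(det["length"] / (2 * resolution)))
        hw = max(1, int(det["width"] / (2 * resolution)))
        x0, x1 = max(0, cx - hl), min(grid_size, cx + hl)
        y0, y1 = max(0, cy - hw), min(grid_size, cy + hw)
        # Where footprints overlap, keep the highest confidence.
        grid[y0:y1, x0:x1] = np.maximum(grid[y0:y1, x0:x1], det["score"])
    return grid

# Example: a confident real car ahead and a low-confidence false positive.
detections = [
    {"x": 8.0, "y": 0.0, "length": 4.5, "width": 2.0, "score": 0.95},
    {"x": 15.0, "y": -1.0, "length": 4.5, "width": 2.0, "score": 0.15},
]
bev = rasterize_probabilistic_bev(detections)
print(bev.shape, bev.max())
```

A downstream imitation policy (for example, a convolutional network trained by behavioral cloning on expert demonstrations, as the abstract describes) would then consume this grid and learn from data when low-score blobs can be ignored rather than braked for.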
