Paper Title

TrackletMapper: Ground Surface Segmentation and Mapping from Traffic Participant Trajectories

Authors

Jannik Zürn, Sebastian Weber, Wolfram Burgard

Abstract

Robustly classifying ground infrastructure such as roads and street crossings is an essential task for mobile robots operating alongside pedestrians. While many semantic segmentation datasets are available for autonomous vehicles, models trained on such datasets exhibit a large domain gap when deployed on robots operating in pedestrian spaces. Manually annotating images recorded from pedestrian viewpoints is both expensive and time-consuming. To overcome this challenge, we propose TrackletMapper, a framework for annotating ground surface types such as sidewalks, roads, and street crossings from object tracklets without requiring human-annotated data. To this end, we project the robot ego-trajectory and the paths of other traffic participants into the ego-view camera images, creating sparse semantic annotations for multiple types of ground surfaces from which a ground segmentation model can be trained. We further show that the model can be self-distilled for additional performance benefits by aggregating a ground surface map and projecting it into the camera images, creating a denser set of training annotations compared to the sparse tracklet annotations. We qualitatively and quantitatively attest our findings on a novel large-scale dataset for mobile robots operating in pedestrian areas. Code and dataset will be made available at http://trackletmapper.cs.uni-freiburg.de.
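The core annotation step described above projects the robot's ego-trajectory and other participants' tracklet paths (points on the ground plane) into the ego-view camera image, labeling the touched pixels with the participant's class. The sketch below illustrates this projection with a standard pinhole camera model; it is a minimal illustration under assumed conventions, not the paper's implementation, and the function name, camera intrinsics, and extrinsics are all hypothetical.

```python
import numpy as np

def project_tracklet_to_image(points_world, T_cam_world, K):
    """Project 3D tracklet points (N, 3), given in the world frame,
    into pixel coordinates of a pinhole camera.

    points_world: ground-contact points along a traffic participant's path.
    T_cam_world:  (4, 4) rigid transform from world frame to camera frame.
    K:            (3, 3) camera intrinsic matrix.
    Returns (M, 2) integer pixel coordinates for points in front of the camera.
    """
    n = points_world.shape[0]
    homog = np.hstack([points_world, np.ones((n, 1))])   # (N, 4) homogeneous coords
    pts_cam = (T_cam_world @ homog.T).T[:, :3]           # points in camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]                 # keep points ahead of camera
    pix = (K @ pts_cam.T).T                              # perspective projection
    pix = pix[:, :2] / pix[:, 2:3]                       # divide by depth
    return pix.astype(int)

# Illustrative setup: camera 1.2 m above the ground, looking along world +x.
# Camera frame convention: x right, y down, z forward.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_cam_world = np.array([[0.0, -1.0,  0.0, 0.0],
                        [0.0,  0.0, -1.0, 1.2],
                        [1.0,  0.0,  0.0, 0.0],
                        [0.0,  0.0,  0.0, 1.0]])
# A straight ground-level path 5-15 m ahead of the robot.
path = np.array([[float(x), 0.0, 0.0] for x in range(5, 16)])
pixels = project_tracklet_to_image(path, T_cam_world, K)
```

In the full pipeline, the pixels hit by a pedestrian's path would be labeled "sidewalk" and those under a vehicle's path "road", yielding the sparse annotations from which the segmentation model is trained; points farther from the camera land closer to the horizon, which is why the resulting labels are sparse and motivate the map-aggregation self-distillation step.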
