Paper Title
Radar-Camera Sensor Fusion for Joint Object Detection and Distance Estimation in Autonomous Vehicles
Paper Authors
Paper Abstract
In this paper, we present a novel radar-camera sensor fusion framework for accurate object detection and distance estimation in autonomous driving scenarios. The proposed architecture uses a middle-fusion approach to fuse radar point clouds and RGB images. Our radar object proposal network uses radar point clouds to generate 3D proposals from a set of 3D prior boxes. These proposals are mapped to the image and fed into a Radar Proposal Refinement (RPR) network for objectness score prediction and box refinement. The RPR network utilizes both radar information and image feature maps to generate accurate object proposals and distance estimates. The radar-based proposals are combined with image-based proposals generated by a modified Region Proposal Network (RPN), which includes a distance regression layer that estimates a distance for every generated proposal. The radar-based and image-based proposals are merged and used in the next stage for object classification. Experiments on the challenging nuScenes dataset show that our method outperforms other existing radar-camera fusion methods on the 2D object detection task while at the same time accurately estimating object distances.
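To make the "distance regression layer" concrete, below is a minimal PyTorch-style sketch of an RPN head extended with a per-anchor distance branch alongside the usual objectness and box-delta outputs. The class name, channel counts, and anchor count are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RPNHeadWithDistance(nn.Module):
    """Hypothetical sketch of an RPN head with an added distance
    regression branch, in the spirit of the abstract. Layer sizes
    are assumptions, not the paper's actual architecture."""

    def __init__(self, in_channels=256, num_anchors=9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1)
        # Standard RPN outputs: one objectness score and 4 box deltas per anchor.
        self.objectness = nn.Conv2d(in_channels, num_anchors, kernel_size=1)
        self.box_deltas = nn.Conv2d(in_channels, num_anchors * 4, kernel_size=1)
        # Added branch: one scalar distance estimate per anchor location.
        self.distance = nn.Conv2d(in_channels, num_anchors, kernel_size=1)

    def forward(self, feature_map):
        x = torch.relu(self.conv(feature_map))
        return self.objectness(x), self.box_deltas(x), self.distance(x)

# Usage on a dummy backbone feature map: for a (1, 256, 50, 50) input,
# the distance output has shape (1, 9, 50, 50), one value per anchor.
head = RPNHeadWithDistance()
scores, deltas, dists = head(torch.randn(1, 256, 50, 50))
```

Attaching the distance branch as an extra 1x1 convolution keeps the head fully convolutional, so each generated proposal receives a distance estimate at essentially no extra cost.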