Paper Title
LSVL: Large-scale season-invariant visual localization for UAVs
Paper Authors
Paper Abstract
Localization of autonomous unmanned aerial vehicles (UAVs) relies heavily on Global Navigation Satellite Systems (GNSS), which are susceptible to interference. Especially in security applications, robust localization algorithms independent of GNSS are needed to provide dependable operation of autonomous UAVs even under interference. Typical non-GNSS visual localization approaches rely on a known starting pose, work only on small maps, or require a known flight path before the mission starts. We consider the problem of localization with no information on the initial pose or planned flight path. We propose a solution for global visual localization on maps of up to 100 km², based on matching orthoprojected UAV images to satellite imagery using learned season-invariant descriptors. We show that the method is able to determine the heading, latitude, and longitude of the UAV with a lateral translation error of 12.6-18.7 m in as few as 23.2-44.4 updates from an uninformed initialization, even with significant seasonal appearance differences (winter-summer) between the UAV images and the map. We evaluate the characteristics of multiple neural network architectures for generating the descriptors, as well as likelihood estimation methods that provide fast convergence and low localization error. We also evaluate the operation of the algorithm using real UAV data and measure running time on a real-time embedded platform. We believe this is the first work able to recover the pose of a UAV at this scale and rate of convergence while allowing significant seasonal differences between the camera observations and the map.
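To make the workflow described in the abstract more concrete, the sketch below shows one way such a descriptor-matching localization loop could look: a discrete Bayes filter over map position and heading, updated with likelihoods derived from the cosine similarity between a learned descriptor of the orthoprojected UAV image and precomputed descriptors of map tiles. This is a minimal illustration under assumed design choices, not the authors' implementation; all names, grid sizes, and the Gaussian motion model are hypothetical.

```python
# Minimal sketch (not the paper's implementation) of season-invariant map
# localization: descriptors of orthoprojected UAV images are compared against
# precomputed map-tile descriptors, and a discrete Bayes filter over
# (row, col, heading) is updated until the belief converges.
import numpy as np
from scipy.ndimage import gaussian_filter

def cosine_likelihood(obs_desc, map_descs, temperature=0.1):
    """Unnormalized likelihood per (row, col, heading) cell from descriptor similarity."""
    obs = obs_desc / np.linalg.norm(obs_desc)
    maps = map_descs / np.linalg.norm(map_descs, axis=-1, keepdims=True)
    sim = maps @ obs                        # cosine similarity, shape (H, W, n_headings)
    return np.exp(sim / temperature)        # sharper peaks for more similar cells

def bayes_update(belief, likelihood, motion_sigma=1.0):
    """One update: spatial diffusion (crude motion model), then measurement weighting."""
    belief = gaussian_filter(belief, sigma=(motion_sigma, motion_sigma, 0))
    belief = belief * likelihood
    return belief / belief.sum()            # renormalize to a probability mass

# Usage: start from a uniform ("uninformed") prior over a gridded map with
# discrete candidate headings, then iterate updates with each new UAV image
# until the belief concentrates on a single pose hypothesis.
H, W, n_headings, dim = 100, 100, 8, 128
belief = np.full((H, W, n_headings), 1.0 / (H * W * n_headings))
map_descriptors = np.random.randn(H, W, n_headings, dim)   # placeholder map descriptors
for _ in range(40):                                         # the paper reports ~23-44 updates
    uav_descriptor = np.random.randn(dim)                   # placeholder UAV-image descriptor
    belief = bayes_update(belief, cosine_likelihood(uav_descriptor, map_descriptors))
row, col, heading_idx = np.unravel_index(belief.argmax(), belief.shape)
```

The filtering scheme, grid resolution, and similarity-to-likelihood mapping here are assumptions chosen only to illustrate how repeated descriptor-matching updates can recover heading, latitude, and longitude from an uninformed initialization.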