Paper Title
Assistive Relative Pose Estimation for On-orbit Assembly using Convolutional Neural Networks
Paper Authors
Paper Abstract
Accurate real-time pose estimation of spacecraft or objects in space is a key capability for on-orbit spacecraft servicing and assembly tasks. Pose estimation of objects in space is more challenging than for objects on Earth because space images contain widely varying illumination conditions, high contrast, and poor resolution, in addition to power and mass constraints. In this paper, a convolutional neural network (CNN) is leveraged to uniquely determine the translation and rotation of an object of interest relative to the camera. The main idea of using a CNN model is to assist the object tracker used in space assembly tasks, where a purely feature-based method is not always sufficient. A simulation framework designed for the assembly task is used to generate a dataset for training the modified CNN models, and the results of the different models are then compared using a measure of how accurately they predict the pose. Unlike many current approaches to pose estimation of spacecraft or objects in space, the model does not rely on hand-crafted, object-specific features, which makes it more robust and easier to apply to other types of spacecraft. It is shown that the model performs comparably to current feature-selection methods and can therefore be used in conjunction with them to provide more reliable estimates.
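To make the kind of model described in the abstract concrete, below is a minimal sketch of a CNN that regresses a relative pose from a single image as a 3-vector translation plus a unit-quaternion rotation. The ResNet-18 backbone, the 224x224 input size, and the loss weighting are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the paper's actual architecture) of a CNN pose regressor:
# predicts a 3-vector translation and a unit quaternion rotation from an image.
# Assumes PyTorch and torchvision are installed; backbone and loss weighting are
# illustrative assumptions only.
import torch
import torch.nn as nn
import torchvision.models as models


class PoseRegressionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Standard convolutional backbone used as a generic feature extractor.
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        # Two small heads: translation (x, y, z) and rotation (quaternion).
        self.fc_translation = nn.Linear(512, 3)
        self.fc_rotation = nn.Linear(512, 4)

    def forward(self, image):
        f = self.features(image).flatten(1)      # (N, 512) feature vector
        t = self.fc_translation(f)               # (N, 3) translation
        q = self.fc_rotation(f)
        q = q / q.norm(dim=1, keepdim=True)      # normalize to a unit quaternion
        return t, q


def pose_loss(t_pred, q_pred, t_true, q_true, beta=1.0):
    """Weighted sum of translation and rotation errors (illustrative weighting)."""
    loss_t = nn.functional.mse_loss(t_pred, t_true)
    # Quaternion distance insensitive to the q / -q sign ambiguity.
    loss_q = 1.0 - torch.abs((q_pred * q_true).sum(dim=1)).mean()
    return loss_t + beta * loss_q


if __name__ == "__main__":
    model = PoseRegressionCNN()
    dummy = torch.randn(2, 3, 224, 224)          # batch of rendered images
    t, q = model(dummy)
    print(t.shape, q.shape)                      # torch.Size([2, 3]) torch.Size([2, 4])
```

In practice, such a regressor would be trained on images rendered by the simulation framework mentioned in the abstract and its pose predictions fused with (rather than replacing) the feature-based object tracker.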