Paper Title
Visual-tactile Fusion for Transparent Object Grasping in Complex Backgrounds
Paper Authors
Paper Abstract
The accurate detection and grasping of transparent objects are challenging but important for robots. Here, a visual-tactile fusion framework is proposed for grasping transparent objects under complex backgrounds and varying lighting conditions, comprising grasping position detection, tactile calibration, and visual-tactile fusion-based classification. First, a multi-scene synthetic grasping dataset generation method with Gaussian-distribution-based data annotation is proposed. Second, a novel grasping network named TGCNN is proposed for grasping position detection, showing good results in both synthetic and real scenes. For tactile calibration, inspired by human grasping, a fully convolutional network-based tactile feature extraction method and a central-location-based adaptive grasping strategy are designed, improving the success rate by 36.7% compared with direct grasping. Furthermore, a visual-tactile fusion method is proposed for transparent object classification, which improves classification accuracy by 34%. The proposed framework synergizes the advantages of vision and touch and greatly improves the grasping efficiency for transparent objects.
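The abstract does not detail the Gaussian-distribution-based data annotation, but a common way to realize such an annotation is to render each labeled grasp center as a 2-D Gaussian peak in a per-pixel grasp-quality map, so the network regresses soft rather than binary targets. Below is a minimal sketch under that assumption; the function name, map size, and `sigma` value are illustrative, not taken from the paper.

```python
import numpy as np

def gaussian_quality_map(height, width, centers, sigma=8.0):
    """Build a grasp-quality map with a 2-D Gaussian peak at each
    labeled grasp center (cy, cx); overlapping peaks keep the max.
    NOTE: a hypothetical sketch, not the paper's actual annotation code."""
    ys, xs = np.mgrid[0:height, 0:width]
    quality = np.zeros((height, width), dtype=np.float32)
    for cy, cx in centers:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
        quality = np.maximum(quality, g.astype(np.float32))
    return quality

# Example: two annotated grasp centers on a 224x224 image.
q = gaussian_quality_map(224, 224, [(100, 80), (150, 170)])
print(q.shape, q.max())  # (224, 224) 1.0
```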
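Likewise, the abstract does not specify how the visual and tactile modalities are fused for classification. One standard pattern consistent with the description is late fusion: visual and tactile feature vectors are encoded separately, concatenated, and passed to a shared classification head. The sketch below assumes that pattern; the class name, feature dimensions, and number of classes are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class VisualTactileFusionClassifier(nn.Module):
    """Late-fusion classifier sketch: encode each modality separately,
    concatenate the embeddings, and classify jointly.
    NOTE: an assumed design, not the paper's actual fusion network."""
    def __init__(self, vis_dim=512, tac_dim=128, num_classes=10):
        super().__init__()
        self.vis_enc = nn.Sequential(nn.Linear(vis_dim, 256), nn.ReLU())
        self.tac_enc = nn.Sequential(nn.Linear(tac_dim, 256), nn.ReLU())
        self.head = nn.Linear(256 + 256, num_classes)

    def forward(self, vis_feat, tac_feat):
        fused = torch.cat([self.vis_enc(vis_feat), self.tac_enc(tac_feat)], dim=1)
        return self.head(fused)

# Example forward pass on a batch of 4 visual/tactile feature vectors.
model = VisualTactileFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 10])
```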