Paper Title

Semi-Siamese Network for Robust Change Detection Across Different Domains with Applications to 3D Printing

Paper Authors

Yushuo Niu, Ethan Chadwick, Anson W. K. Ma, Qian Yang

Paper Abstract

Automatic defect detection for 3D printing processes, which shares many characteristics with change detection problems, is a vital step for quality control of 3D printed products. However, there are some critical challenges in the current state of practice. First, existing methods for computer vision-based process monitoring typically work well only under specific camera viewpoints and lighting situations, requiring expensive pre-processing, alignment, and camera setups. Second, many defect detection techniques are specific to pre-defined defect patterns and/or print schematics. In this work, we approach the defect detection problem using a novel Semi-Siamese deep learning model that directly compares a reference schematic of the desired print and a camera image of the achieved print. The model then solves an image segmentation problem, precisely identifying the locations of defects of different types with respect to the reference schematic. Our model is designed to enable comparison of heterogeneous images from different domains while being robust against perturbations in the imaging setup such as different camera angles and illumination. Crucially, we show that our simple architecture, which is easy to pre-train for enhanced performance on new datasets, outperforms more complex state-of-the-art approaches based on generative adversarial networks and transformers. Using our model, defect localization predictions can be made in less than half a second per layer using a standard MacBook Pro while achieving an F1-score of more than 0.9, demonstrating the efficacy of using our method for in-situ defect detection in 3D printing.
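To make the comparison pipeline concrete, below is a minimal sketch (assuming PyTorch) of what a Semi-Siamese segmentation model of this kind might look like: two encoders with unshared weights embed the heterogeneous inputs (print schematic vs. camera image) into a common feature space, and a shared head decodes the fused features into a per-pixel defect map. The layer widths, fusion by concatenation, and number of defect classes are illustrative assumptions, not the architecture reported in the paper.

```python
# Hypothetical sketch of a Semi-Siamese comparison network (NOT the paper's
# exact architecture): two domain-specific encoders with unshared weights map
# the reference schematic and the camera image into a common feature space;
# the fused features are decoded into a per-pixel defect-class segmentation
# map. Layer widths and the number of defect classes are assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class SemiSiameseSegmenter(nn.Module):
    def __init__(self, num_defect_classes=3):
        super().__init__()
        # Unshared ("semi-Siamese") encoders for the two heterogeneous domains.
        self.encode_schematic = nn.Sequential(conv_block(1, 32), nn.MaxPool2d(2), conv_block(32, 64))
        self.encode_camera = nn.Sequential(conv_block(3, 32), nn.MaxPool2d(2), conv_block(32, 64))
        # Shared head compares the fused features and predicts per-pixel labels.
        self.decode = nn.Sequential(
            conv_block(128, 64),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_defect_classes, kernel_size=1),
        )

    def forward(self, schematic, camera_image):
        feats = torch.cat(
            [self.encode_schematic(schematic), self.encode_camera(camera_image)], dim=1
        )
        return self.decode(feats)  # logits: (batch, classes, H, W)


if __name__ == "__main__":
    model = SemiSiameseSegmenter()
    schematic = torch.randn(1, 1, 128, 128)  # grayscale print schematic
    camera = torch.randn(1, 3, 128, 128)     # RGB camera image of the print
    print(model(schematic, camera).shape)    # torch.Size([1, 3, 128, 128])
```

Keeping the two encoders unshared is what distinguishes a "semi"-Siamese design from a standard Siamese network: because the schematic and the camera image come from different visual domains, forcing identical weights on both branches would be overly restrictive.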
