Paper Title
DARTSRepair: Core-failure-set Guided DARTS for Network Robustness to Common Corruptions
Paper Authors
Paper Abstract
Network architecture search (NAS), in particular the differentiable architecture search (DARTS) method, has shown great power in learning excellent model architectures on a specific dataset of interest. In contrast to using a fixed dataset, in this work we focus on a different but important scenario for NAS: how to refine the architecture of a deployed network to enhance its robustness, guided by a few collected and misclassified examples that are degraded by some unknown real-world corruption with a specific pattern (e.g., noise, blur, etc.). To this end, we first conduct an empirical study to validate that model architectures are indeed related to corruption patterns. Surprisingly, by adding only a few corrupted and misclassified examples (e.g., $10^3$ examples) to the clean training dataset (e.g., $5.0 \times 10^4$ examples), we can refine the model architecture and enhance its robustness significantly. To make this practical, a key problem must be carefully investigated: how to select the proper failure examples for effective NAS guidance. We therefore propose a novel core-failure-set guided DARTS that embeds a K-center-greedy algorithm into DARTS to select suitable corrupted failure examples for refining the model architecture. We evaluate DARTS-refined DNNs obtained with our method on clean data as well as 15 corruption types, under the guidance of four specific real-world corruptions. Compared with state-of-the-art NAS and data-augmentation-based enhancement methods, our final method achieves higher accuracy on both the corrupted datasets and the original clean dataset. On some corruption patterns, we achieve absolute accuracy improvements of over 45%.
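The selection step the abstract refers to can be illustrated with a minimal sketch of K-center-greedy coreset selection, assuming the candidate corrupted failure examples are represented by feature embeddings (e.g., penultimate-layer features) in an (N, D) array; the function name `k_center_greedy`, the seeding choice, and the Euclidean distance are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def k_center_greedy(features, k, seed_indices=None):
    """Greedily select k indices whose feature vectors cover the candidate set.

    features: (N, D) array of embeddings of corrupted, misclassified examples.
    k: number of core failure examples to select.
    seed_indices: optional list of indices already fixed in the selected set.
    """
    selected = list(seed_indices) if seed_indices else [0]
    # Distance from every candidate to its nearest currently selected center.
    min_dist = np.min(
        np.linalg.norm(features[None, :, :] - features[selected][:, None, :], axis=-1),
        axis=0,
    )
    while len(selected) < k:
        idx = int(np.argmax(min_dist))  # farthest candidate from the selected centers
        selected.append(idx)
        new_dist = np.linalg.norm(features - features[idx], axis=-1)
        min_dist = np.minimum(min_dist, new_dist)  # refresh nearest-center distances
    return selected

# Hypothetical usage: pick ~10^3 failure examples as the core failure set,
# then mix them into the clean training data before re-running the DARTS search.
# feats = get_penultimate_features(failure_images)  # assumed (N, D) helper, not from the paper
# core_idx = k_center_greedy(feats, k=1000)
```

The greedy rule picks, at each step, the failure example farthest from those already chosen, so the selected subset spreads over diverse failure modes rather than clustering on one corruption severity; under the abstract's description, these examples would then guide the DARTS refinement of the deployed architecture.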