Paper Title
Joint Protection Scheme for Deep Neural Network Hardware Accelerators and Models
Authors
Abstract
Deep neural networks (DNNs) are utilized in numerous image processing, object detection, and video analysis tasks and need to be implemented on hardware accelerators to achieve practical speed. Logic locking is one of the most popular methods for preventing chip counterfeiting. However, to resist the powerful satisfiability (SAT) attack, existing logic-locking schemes must sacrifice the number of input patterns that produce wrong outputs under incorrect keys. Furthermore, DNN model inference is fault-tolerant, so using a wrong key with such SAT-resistant logic-locking schemes may not affect DNN accuracy. This makes previous SAT-resistant logic-locking schemes ineffective at protecting DNN accelerators. In addition, to prevent DNN models from being used illegally, designers must obfuscate the models before providing them to end users. Previous obfuscation methods either require a long time to retrain the model or leak information about the model. This paper proposes a joint protection scheme for DNN hardware accelerators and models. The DNN accelerator is modified using a hardware key (Hkey) and a model key (Mkey). Unlike previous logic locking, the Hkey, which protects the accelerator, does not affect the output when it is wrong; as a result, the SAT attack is effectively resisted. Instead, a wrong Hkey leads to substantial increases in memory accesses, inference time, and energy consumption, rendering the accelerator unusable. The correct Mkey recovers a DNN model obfuscated by the proposed method. Compared with previous model obfuscation schemes, our proposed method avoids model retraining and does not leak model information.
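As background for the logic-locking trade-off the abstract refers to, here is a minimal toy sketch (ours, not from the paper) of conventional XOR-based logic locking: a key gate XORs an internal wire with a key bit, so only the correct key restores the original circuit function, while a wrong key corrupts the output on some input patterns. It is precisely this output corruption that SAT attacks exploit, and that SAT-resistant schemes shrink, which fault-tolerant DNN inference can then simply absorb.

```python
# Toy XOR-based logic locking on a 3-input combinational function.
# All names here are illustrative; this is not the paper's scheme.

def original_circuit(a: int, b: int, c: int) -> int:
    """Original (unlocked) function: (a AND b) XOR c."""
    return (a & b) ^ c

def locked_circuit(a: int, b: int, c: int, key: int) -> int:
    """Locked version with an XOR key gate on the AND output.
    With key == 0 it matches the original; any other key corrupts
    the output on some (here, all) input patterns."""
    return ((a & b) ^ key) ^ c

inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

# Correct key: outputs match on every input pattern.
assert all(locked_circuit(a, b, c, key=0) == original_circuit(a, b, c)
           for a, b, c in inputs)

# Wrong key: count corrupted input patterns.
wrong = sum(locked_circuit(a, b, c, key=1) != original_circuit(a, b, c)
            for a, b, c in inputs)
print(wrong)  # prints 8: every pattern is corrupted by this key gate
```

SAT-resistant schemes deliberately reduce this corruption count toward a single pattern; the abstract's point is that a DNN accelerator protected this way still computes nearly correct results under a wrong key, which is why the proposed Hkey instead degrades memory accesses, latency, and energy rather than output correctness.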