Paper Title

AutoPrivacy: Automated Layer-wise Parameter Selection for Secure Neural Network Inference

Paper Authors

Qian Lou, Song Bian, Lei Jiang

Paper Abstract

Hybrid Privacy-Preserving Neural Networks (HPPNNs), which implement linear layers with Homomorphic Encryption (HE) and nonlinear layers with Garbled Circuits (GC), are among the most promising secure solutions for emerging Machine Learning as a Service (MLaaS). Unfortunately, an HPPNN suffers from long inference latency, e.g., $\sim100$ seconds per image, which makes MLaaS unsatisfactory. Because the HE-based linear layers of an HPPNN account for $93\%$ of inference latency, it is critical to select a set of HE parameters that minimizes the computational overhead of the linear layers. Prior HPPNNs over-pessimistically select huge HE parameters to maintain large noise budgets, since they use the same set of HE parameters for the entire network and ignore the error tolerance of the network. In this paper, for fast and accurate secure neural network inference, we propose an automated layer-wise parameter selector, AutoPrivacy, that leverages deep reinforcement learning to automatically determine a set of HE parameters for each linear layer in an HPPNN. The learning-based HE parameter selection policy outperforms conventional rule-based policies. Compared to prior HPPNNs, AutoPrivacy-optimized HPPNNs reduce inference latency by $53\%\sim70\%$ with negligible loss of accuracy.
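
To make the core idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) contrasting the conventional rule of one worst-case HE parameter set for the entire network with per-layer selection. The candidate parameter sets, noise model, and latency model are invented placeholders; AutoPrivacy replaces the simple greedy rule shown here with a learned deep-reinforcement-learning policy.

```python
# Hypothetical sketch of layer-wise HE parameter selection.
# Candidate HE parameter sets: (polynomial degree n, log2 of ciphertext
# modulus q). Larger parameters give a larger noise budget but make every
# homomorphic operation slower.
CANDIDATES = [(2048, 54), (4096, 109), (8192, 218), (16384, 438)]

def noise_budget(n, log_q):
    """Placeholder noise budget (bits) provided by a parameter set."""
    return 0.8 * log_q

def noise_demand(depth):
    """Placeholder noise (bits) consumed by the HE linear layer at `depth`."""
    return 30 + 5 * depth

def latency(n, log_q):
    """Placeholder per-layer latency; HE ops scale roughly with n * log q."""
    return n * log_q / 1e6

layers = list(range(8))  # eight linear layers, deeper layers consume more noise

# Per-layer rule: cheapest candidate whose noise budget covers that layer.
per_layer = [
    min((c for c in CANDIDATES if noise_budget(*c) >= noise_demand(d)),
        key=lambda c: latency(*c))
    for d in layers
]

# Conventional rule: one worst-case parameter set for the whole network.
worst = max(noise_demand(d) for d in layers)
global_choice = min((c for c in CANDIDATES if noise_budget(*c) >= worst),
                    key=lambda c: latency(*c))

print("per-layer latency:", sum(latency(*c) for c in per_layer))
print("global latency   :", latency(*global_choice) * len(layers))
```

Even under this crude rule, layers that consume little noise get smaller, faster parameters, which is why the per-layer total comes out lower than the global worst-case total. AutoPrivacy's learned policy goes further by also exploiting each layer's error tolerance, something a hard noise-budget rule cannot capture.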
