Paper Title

Adversarial Reprogramming Revisited

Authors

Matthias Englert, Ranko Lazic

Abstract

Adversarial reprogramming, introduced by Elsayed, Goodfellow, and Sohl-Dickstein, seeks to repurpose a neural network to perform a different task, by manipulating its input without modifying its weights. We prove that two-layer ReLU neural networks with random weights can be adversarially reprogrammed to achieve arbitrarily high accuracy on Bernoulli data models over hypercube vertices, provided the network width is no greater than its input dimension. We also substantially strengthen a recent result of Phuong and Lampert on directional convergence of gradient flow, and obtain as a corollary that training two-layer ReLU neural networks on orthogonally separable datasets can cause their adversarial reprogramming to fail. We support these theoretical results by experiments that demonstrate that, as long as batch normalisation layers are suitably initialised, even untrained networks with random weights are susceptible to adversarial reprogramming. This is in contrast to observations in several recent works that suggested that adversarial reprogramming is not possible for untrained networks to any degree of reliability.
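The setting described in the abstract can be illustrated with a minimal sketch: a two-layer ReLU network with frozen random weights, hypercube inputs, and an additive adversarial program that is the only trainable parameter. All dimensions, the scaling factor, the labelling rule, and the use of logistic loss below are illustrative assumptions, not the paper's actual construction or proof.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 50, 30, 200            # input dim, width (m <= d, as in the theorem), samples

# Frozen two-layer ReLU network with random weights: f(z) = a . relu(W z)
W = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m)

# Toy task: hypercube inputs {-1, +1}^d with labels from a hidden linear rule
# (a stand-in for the Bernoulli data model, chosen here for simplicity)
X = rng.choice([-1.0, 1.0], size=(n, d))
y = np.sign(X @ rng.normal(size=d))

# Train only the adversarial program p; the network weights never change
p = np.zeros(d)
lr, eps = 0.5, 0.1
loss_hist = []
for _ in range(1000):
    Z = eps * X + p                          # program shifts every (scaled) input
    Pre = Z @ W.T
    F = np.maximum(Pre, 0.0) @ a             # frozen network's outputs
    margins = y * F
    loss_hist.append(np.mean(np.logaddexp(0.0, -margins)))   # logistic loss
    s = np.exp(-np.logaddexp(0.0, margins))  # stable sigmoid(-margins)
    G = ((Pre > 0.0) * a) @ W                # per-sample gradient dF/dp
    p += lr * np.mean((y * s)[:, None] * G, axis=0)

acc = np.mean(np.sign(np.maximum((eps * X + p) @ W.T, 0.0) @ a) == y)
print(f"loss {loss_hist[0]:.3f} -> {loss_hist[-1]:.3f}, accuracy {acc:.2f}")
```

Gradient descent on the program alone drives the loss down, which is the qualitative phenomenon the paper analyses; the theorem's guarantee of arbitrarily high accuracy relies on its specific construction, not on this heuristic optimisation.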
