Paper Title
Smoothed Analysis of Online and Differentially Private Learning
Paper Authors
Paper Abstract
Practical and pervasive needs for robustness and privacy in algorithms have inspired the design of online adversarial and differentially private learning algorithms. The primary quantity that characterizes learnability in these settings is the Littlestone dimension of the class of hypotheses [Ben-David et al., 2009; Alon et al., 2019]. This characterization is often interpreted as an impossibility result because classes such as linear thresholds and neural networks have infinite Littlestone dimension. In this paper, we apply the framework of smoothed analysis [Spielman and Teng, 2004], in which adversarially chosen inputs are perturbed slightly by nature. We show that fundamentally stronger regret and error guarantees are possible with smoothed adversaries than with worst-case adversaries. In particular, we obtain regret and privacy error bounds that depend only on the VC dimension and the bracketing number of a hypothesis class, and on the magnitudes of the perturbations.
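
To make the smoothed-adversary setting concrete, the following is a minimal, self-contained sketch, not from the paper: it assumes one common model of a sigma-smooth adversary (uniform noise of width sigma added by nature to the adversary's chosen input) and runs a standard Hedge learner over a finite grid of 1-D linear thresholds, a class whose Littlestone dimension is infinite in the worst case. All names and parameters here (`smooth`, `sigma`, `theta_star`, the expert grid) are hypothetical choices for illustration.

```python
import numpy as np

# Illustrative sketch of online learning against a sigma-smooth adversary.
# Hypothetical setup; not the paper's algorithm or guarantees.
rng = np.random.default_rng(0)

def smooth(x, sigma):
    # Nature perturbs the adversary's chosen point with uniform noise
    # of width sigma (one common model of a sigma-smooth distribution).
    return float(np.clip(x + rng.uniform(-sigma / 2, sigma / 2), 0.0, 1.0))

def predict(theta, x):
    # 1-D linear thresholds: predict 1 iff x >= theta.
    return int(x >= theta)

T, sigma, theta_star = 2000, 0.1, 0.5
thetas = np.linspace(0.0, 1.0, 101)         # finite expert grid of thresholds
eta = np.sqrt(2 * np.log(len(thetas)) / T)  # standard Hedge learning rate
losses = np.zeros_like(thetas)              # cumulative 0-1 loss per expert
learner_loss = 0.0

for _ in range(T):
    x = smooth(theta_star, sigma)   # adversary targets the boundary; nature smooths
    y = predict(theta_star, x)      # true label from the target threshold
    mistakes = np.array([predict(th, x) != y for th in thetas], dtype=float)
    w = np.exp(-eta * losses)                 # Hedge weights from past losses
    learner_loss += (w / w.sum()) @ mistakes  # expected 0-1 loss this round
    losses += mistakes

print(f"Hedge regret over {T} rounds: {learner_loss - losses.min():.2f}")
```

In this toy run the perturbation spreads the adversary's inputs around the decision boundary, so a fixed finite grid of experts suffices and the learner's regret stays small; against an unsmoothed worst-case adversary placing points exactly at the boundary, no such finite discretization argument is available.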