Paper Title
Temporal Variability in Implicit Online Learning
Paper Authors
Abstract
In the setting of online learning, implicit algorithms turn out to be highly successful from a practical standpoint. However, the tightest regret analyses only show marginal improvements over Online Mirror Descent. In this work, we shed light on this behavior by carrying out a careful regret analysis. We prove a novel static regret bound that depends on the temporal variability of the sequence of loss functions, a quantity which is often encountered when considering dynamic competitors. We show, for example, that the regret can be constant if the temporal variability is constant and the learning rate is tuned appropriately, without the need for smooth losses. Moreover, we present an adaptive algorithm that achieves this regret bound without prior knowledge of the temporal variability, and we prove a matching lower bound. Finally, we validate our theoretical findings on classification and regression datasets.
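To make the implicit update concrete, here is a minimal sketch of one common instance: implicit Online Mirror Descent (the online proximal point method) with the Euclidean mirror map and squared loss, for which each round's proximal subproblem has a closed form. This is an illustrative assumption on our part, not the paper's adaptive algorithm — in particular, the learning rate here is fixed rather than tuned to the temporal variability.

```python
import numpy as np

def implicit_omd_squared_loss(X, y, eta=1.0):
    """Implicit OMD sketch for squared loss with the Euclidean mirror map.

    Each round t solves
        w_{t+1} = argmin_w  eta * 0.5 * (<x_t, w> - y_t)**2
                          + 0.5 * ||w - w_t||**2,
    whose solution for the squared loss is the closed-form step below.
    (Illustrative only: fixed eta, not the paper's adaptive tuning.)
    """
    w = np.zeros(X.shape[1])
    losses = []
    for x_t, y_t in zip(X, y):
        pred = x_t @ w
        losses.append(0.5 * (pred - y_t) ** 2)  # loss suffered before updating
        # Closed-form implicit (proximal) step: the gradient is evaluated
        # at the *new* point, which yields the 1/(1 + eta*||x_t||^2) factor.
        w = w - eta * (pred - y_t) / (1.0 + eta * (x_t @ x_t)) * x_t
    return w, losses

# Usage: a noiseless linear stream has zero temporal variability, so the
# per-round losses shrink quickly.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_star = np.array([1.0, -2.0, 0.5])  # hypothetical ground-truth weights
y = X @ w_star
w, losses = implicit_omd_squared_loss(X, y, eta=1.0)
```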