Paper title
Efficient improper learning for online logistic regression
Paper authors
Paper abstract
We consider the setting of online logistic regression and consider the regret with respect to the 2-ball of radius B. It is known (see [Hazan et al., 2014]) that any proper algorithm which has logarithmic regret in the number of samples (denoted n) necessarily suffers an exponential multiplicative constant in B. In this work, we design an efficient improper algorithm that avoids this exponential constant while preserving a logarithmic regret. Indeed, [Foster et al., 2018] showed that the lower bound does not apply to improper algorithms and proposed a strategy based on exponential weights with prohibitive computational complexity. Our new algorithm based on regularized empirical risk minimization with surrogate losses satisfies a regret scaling as O(B log(Bn)) with a per-round time-complexity of order O(d^2).
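As a rough illustration of the recipe the abstract describes (online logistic regression via regularized empirical risk minimization on a surrogate loss, with O(d^2) work per round), here is a minimal Python sketch. The quadratic surrogate, the regularization constant lam, the curvature floor, and the Sherman-Morrison update are illustrative assumptions for this sketch, not the paper's exact algorithm or prediction rule.

    import numpy as np

    class SurrogateRegularizedERM:
        """Hypothetical sketch: at each round, replace the logistic loss by a
        quadratic surrogate around the current estimate and re-solve the
        regularized least-squares problem in closed form. Keeping the inverse
        of the regularized curvature matrix up to date with a rank-one
        (Sherman-Morrison) update makes each round cost O(d^2)."""

        def __init__(self, d, lam=1.0):
            self.theta = np.zeros(d)       # current parameter estimate
            self.A_inv = np.eye(d) / lam   # inverse of regularized curvature matrix
            self.b = np.zeros(d)           # accumulated surrogate right-hand side

        def predict_proba(self, x):
            # Sigmoid of the current linear score (clipped for numerical safety).
            score = float(np.clip(self.theta @ x, -30.0, 30.0))
            return 1.0 / (1.0 + np.exp(-score))

        def update(self, x, y):
            # y in {0, 1}; build a quadratic surrogate of the logistic loss at
            # the current theta: gradient g and curvature weight c (floored).
            p = self.predict_proba(x)
            g = (p - y) * x
            c = max(p * (1.0 - p), 1e-4)

            # Sherman-Morrison rank-one update of (A + c x x^T)^{-1}: O(d^2).
            Ax = self.A_inv @ x
            self.A_inv -= np.outer(Ax, Ax) * (c / (1.0 + c * (x @ Ax)))

            # Accumulate the surrogate's linear term and re-solve the
            # regularized quadratic problem in closed form: O(d^2).
            self.b += c * (x @ self.theta) * x - g
            self.theta = self.A_inv @ self.b

Each round manipulates only d-vectors and d x d matrices, which is where an O(d^2) per-round cost of the kind stated in the abstract can come from; the paper's actual surrogate loss and (improper) prediction rule should be taken from the full text.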