Paper title
Sharp bounds on the price of bandit feedback for several models of mistake-bounded online learning
Paper authors
Paper abstract
We determine sharp bounds on the price of bandit feedback for several variants of the mistake-bound model. The first part of the paper presents bounds for the $r$-input weak reinforcement model and the $r$-input delayed, ambiguous reinforcement model. In both models, the adversary gives $r$ inputs in each round and indicates a correct answer only if all $r$ guesses are correct. The only difference between the two models is that in the delayed, ambiguous model the learner must answer each input before receiving the next input of the round, whereas in the weak reinforcement model the learner receives all $r$ inputs at once. In the second part of the paper, we introduce models for online learning with permutation patterns, in which a learner attempts to learn a permutation from a set of permutations by guessing statistics related to sub-permutations. For these permutation models, we prove sharp bounds on the price of bandit feedback.
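For concreteness, the following is a minimal, hypothetical Python sketch (not from the paper) of how a single round could proceed in each of the two $r$-input models under bandit-style feedback: the learner receives only a single bit per round, and that bit is set only when all $r$ guesses are correct. The `Learner` and `Adversary` stubs and all function names are illustrative assumptions, not the paper's formal definitions.

```python
import random

class Adversary:
    """Toy adversary: a hidden list of binary labels, served cyclically."""
    def __init__(self, labels):
        self.labels = labels          # hidden truth: input index -> 0/1
        self.t = 0
    def next_input(self):
        x = self.t % len(self.labels)
        self.t += 1
        return x
    def is_correct(self, x, guess):
        return self.labels[x] == guess

class Learner:
    """Toy learner that guesses uniformly at random."""
    def guess(self, x):
        return random.randint(0, 1)
    def receive_feedback(self, all_correct):
        pass                          # one bit per round, as in bandit feedback

def weak_reinforcement_round(learner, adversary, r):
    # Weak reinforcement: all r inputs arrive at once, answers chosen together.
    inputs = [adversary.next_input() for _ in range(r)]
    guesses = [learner.guess(x) for x in inputs]
    ok = all(adversary.is_correct(x, g) for x, g in zip(inputs, guesses))
    learner.receive_feedback(ok)      # single bit for the whole round
    return ok

def delayed_ambiguous_round(learner, adversary, r):
    # Delayed, ambiguous: each input must be answered before the next appears.
    ok = True
    for _ in range(r):
        x = adversary.next_input()
        g = learner.guess(x)          # committed before seeing the next input
        ok = ok and adversary.is_correct(x, g)
    learner.receive_feedback(ok)      # feedback revealed only after the round
    return ok

if __name__ == "__main__":
    adv = Adversary(labels=[0, 1, 1, 0])
    lrn = Learner()
    print(weak_reinforcement_round(lrn, adv, r=3))
    print(delayed_ambiguous_round(lrn, adv, r=3))
```

The two round functions differ only in when the inputs are revealed, which mirrors the abstract's point that this timing is the sole difference between the models; the single-bit feedback at the end of each round is what makes the feedback "bandit" rather than full-information.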