Paper Title

Online Caching with no Regret: Optimistic Learning via Recommendations

Authors

Naram Mhaisen, George Iosifidis, Douglas Leith

Abstract

The design of effective online caching policies is an increasingly important problem for content distribution networks, online social networks, and edge computing services, among other areas. This paper proposes a new algorithmic toolbox for tackling this problem through the lens of \emph{optimistic} online learning. We build upon the Follow-the-Regularized-Leader (FTRL) framework, which is developed further here to include predictions for the file requests, and we design online caching algorithms for bipartite networks with pre-reserved or dynamic storage subject to time-average budget constraints. The predictions are provided by a content recommendation system that influences the users' viewing activity and hence can naturally reduce the caching network's uncertainty about future requests. We also extend the framework to learn and utilize the best request predictor in cases where many are available. We prove that the proposed \emph{optimistic} learning caching policies can achieve \emph{sub-zero} performance loss (regret) for perfect predictions, and maintain the sub-linear regret bound $O(\sqrt{T})$, which is the best achievable bound for policies that do not use predictions, even under arbitrarily bad predictions. The performance of the proposed algorithms is evaluated with detailed trace-driven numerical tests.
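The optimistic-learning idea in the abstract can be illustrated with a minimal sketch, not the authors' exact algorithm: a single cache of capacity `C` over `N` files, linear cache-hit utility, and an optimistic-gradient update that plays the projection of the accumulated past requests plus a prediction of the next one. All names (`project_capped_simplex`, `optimistic_caching`, the step size `eta`) are illustrative assumptions.

```python
import numpy as np

def project_capped_simplex(y, C):
    """Euclidean projection of y onto {x : 0 <= x <= 1, sum(x) <= C}."""
    x = np.clip(y, 0.0, 1.0)
    if x.sum() <= C:
        return x
    # Bisection on the Lagrange multiplier tau of the capacity constraint:
    # sum(clip(y - tau, 0, 1)) is non-increasing in tau.
    lo, hi = y.min() - 1.0, y.max()
    for _ in range(100):
        tau = 0.5 * (lo + hi)
        if np.clip(y - tau, 0.0, 1.0).sum() > C:
            lo = tau
        else:
            hi = tau
    return np.clip(y - hi, 0.0, 1.0)

def optimistic_caching(requests, predictions, N, C, eta=0.1):
    """Optimistic-gradient caching sketch; returns total (fractional) hits.

    requests, predictions: sequences of length-N vectors (one-hot requests
    and predictions of the *next* request, e.g. from a recommender).
    """
    grad_sum = np.zeros(N)   # accumulated (sub)gradients = past requests
    total_hits = 0.0
    for r, pred in zip(requests, predictions):
        # Play the iterate that also trusts the prediction of the next request.
        x = project_capped_simplex(eta * (grad_sum + pred), C)
        total_hits += float(x @ r)   # fractional cache hit at this step
        grad_sum += r
    return total_hits
```

With perfect predictions (`pred` equal to the upcoming request), the iterate is biased toward the file about to be requested, so the policy accumulates strictly more hits than the same update with zero (uninformative) predictions, mirroring the sub-zero-regret intuition in the abstract.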
