Paper Title

Efficient Policy Iteration for Robust Markov Decision Processes via Regularization

Paper Authors

Navdeep Kumar, Kfir Levy, Kaixin Wang, Shie Mannor

Paper Abstract

Robust Markov decision processes (MDPs) provide a general framework for modeling decision problems in which the system dynamics change over time or are only partially known. Efficient methods exist for some \texttt{sa}-rectangular robust MDPs; they exploit the equivalence with reward-regularized MDPs and generalize to online settings. \texttt{s}-rectangular robust MDPs are less restrictive than \texttt{sa}-rectangular robust MDPs but much harder to handle. Interestingly, recent work has established an equivalence between \texttt{s}-rectangular robust MDPs and policy-regularized MDPs. However, it is not yet clear how to exploit this equivalence to perform policy improvement steps and obtain the optimal value function or policy; little is known about the greedy/optimal policy beyond the fact that it can be stochastic; and no existing method naturally generalizes to model-free settings. We show a clear and explicit equivalence between \texttt{s}-rectangular $L_p$ robust MDPs and policy-regularized MDPs that closely resemble the policy entropy regularized MDPs widely used in practice. Further, we analyze the policy improvement step and concretely derive optimal robust Bellman operators for \texttt{s}-rectangular $L_p$ robust MDPs. We find that the greedy/optimal policies in \texttt{s}-rectangular $L_p$ robust MDPs are threshold policies: they play only the top $k$ actions whose $Q$-value exceeds some threshold (value), each with probability proportional to the $(p-1)$th power of its advantage. In addition, we show that the time complexity of (\texttt{sa}- and \texttt{s}-rectangular) $L_p$ robust MDPs is the same as that of non-robust MDPs up to logarithmic factors. Our work greatly extends the existing understanding of \texttt{s}-rectangular robust MDPs and naturally generalizes to online settings.
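To make the threshold-policy structure concrete, below is a minimal sketch (in Python/NumPy) of how such a greedy policy could be formed from the Q-values at a single state. It assumes p > 1 and that the threshold lam has already been determined (in the paper this comes out of the optimal robust Bellman operator); the function name and arguments are illustrative of the abstract's description, not the authors' implementation.

import numpy as np

def threshold_policy(q_values, lam, p):
    """Illustrative sketch: play only actions whose Q-value exceeds the
    threshold lam, with probability proportional to the (p-1)-th power of
    the margin above it (the advantage-like quantity in the abstract)."""
    q = np.asarray(q_values, dtype=float)
    margin = np.maximum(q - lam, 0.0)        # actions at or below the threshold get zero mass
    weights = margin ** (p - 1)
    if weights.sum() == 0.0:                 # no action clears the threshold: fall back to argmax
        weights = (q == q.max()).astype(float)
    return weights / weights.sum()

# Example: for p = 2 the probability mass grows linearly with the margin above the threshold.
print(threshold_policy([1.0, 0.5, 0.2, -0.3], lam=0.4, p=2))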
