Paper Title

Riemannian Proximal Policy Optimization

Authors

Shijun Wang, Baocheng Zhu, Chen Li, Mingzhe Wu, James Zhang, Wei Chu, Yuan Qi

Abstract

In this paper, we propose a general Riemannian proximal optimization algorithm with guaranteed convergence for solving Markov decision process (MDP) problems. To model policy functions in MDPs, we employ a Gaussian mixture model (GMM) and formulate policy learning as a nonconvex optimization problem in the Riemannian space of positive semidefinite matrices. For two given policy functions, we also provide a lower bound on policy improvement derived from the Wasserstein distance between GMMs. Preliminary experiments show the efficacy of the proposed Riemannian proximal policy optimization algorithm.
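The policy-improvement bound mentioned in the abstract is built on the Wasserstein distance between GMM policies. As a minimal illustration (not the paper's algorithm), the 2-Wasserstein distance between two individual Gaussian components has a well-known closed form, W2^2 = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2(C2^{1/2} C1 C2^{1/2})^{1/2}); the sketch below implements it with a NumPy-only PSD matrix square root (`psd_sqrt` is a hypothetical helper name, not from the paper):

```python
import numpy as np

def psd_sqrt(m):
    # Matrix square root of a symmetric PSD matrix via eigendecomposition;
    # eigenvalues are clipped at zero to absorb small negative round-off.
    vals, vecs = np.linalg.eigh(np.atleast_2d(m))
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def gaussian_w2(mu1, cov1, mu2, cov2):
    # Closed-form 2-Wasserstein distance between N(mu1, cov1) and N(mu2, cov2):
    # W2^2 = ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov2^{1/2} cov1 cov2^{1/2})^{1/2})
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.atleast_2d(cov1), np.atleast_2d(cov2)
    sc2 = psd_sqrt(cov2)
    cross = psd_sqrt(sc2 @ cov1 @ sc2)
    w2_sq = np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * cross)
    return float(np.sqrt(max(w2_sq, 0.0)))

# Example: two 1-D Gaussians with equal variance differ only through their means.
d = gaussian_w2([0.0], [[1.0]], [2.0], [[1.0]])  # -> 2.0
```

Extending this per-component distance to a bound on the distance between full mixtures (and from there to the policy-improvement bound) is where the paper's analysis comes in; the snippet only covers the Gaussian building block.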
