Paper Title

Energy-based Surprise Minimization for Multi-Agent Value Factorization

Paper Authors

Karush Suri, Xiao Qi Shi, Konstantinos Plataniotis, Yuri Lawryshyn

Paper Abstract

Multi-Agent Reinforcement Learning (MARL) has demonstrated significant success in training decentralised policies in a centralised manner by making use of value factorization methods. However, addressing surprise across spurious states and approximation bias remain open problems for multi-agent settings. Towards this goal, we introduce the Energy-based MIXer (EMIX), an algorithm which minimizes surprise utilizing the energy across agents. Our contributions are threefold: (1) EMIX introduces a novel surprise minimization technique across multiple agents in the case of multi-agent partially-observable settings. (2) EMIX highlights a practical use of energy functions in MARL, with theoretical guarantees and experimental validation of the energy operator. Lastly, (3) EMIX extends Maxmin Q-learning to address overestimation bias across agents in MARL. In a study of challenging StarCraft II micromanagement scenarios, EMIX demonstrates consistent, stable performance for multi-agent surprise minimization. Moreover, our ablation study highlights the necessity of the energy-based scheme and the need to eliminate overestimation bias in MARL. Our implementation of EMIX can be found at karush17.github.io/emix-web/.
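
EMIX builds on value factorization, where per-agent utilities are mixed into a joint Q-value under centralised training. As background, here is a minimal sketch of a QMIX-style monotonic mixer, the standard building block behind value factorization methods of this kind; it is background only, not the paper's energy-based variant, and the class name, embedding size, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicMixer(nn.Module):
    """QMIX-style mixer: combines per-agent Q-values into Q_tot using
    state-conditioned non-negative weights, so dQ_tot/dQ_i >= 0 and the
    joint argmax decomposes into per-agent argmaxes."""

    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks map the global state to the mixing parameters.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Linear(state_dim, 1)

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents), state: (batch, state_dim)
        bs = agent_qs.size(0)
        # abs() enforces non-negative weights, hence monotonic mixing.
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = F.elu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)  # (bs, 1, embed)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2                         # (bs, 1, 1)
        return q_tot.view(bs)

# Toy usage: batch of 4, 2 agents, 8-dimensional global state.
mixer = MonotonicMixer(n_agents=2, state_dim=8)
q_tot = mixer(torch.randn(4, 2), torch.randn(4, 8))
```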

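The abstract's other two ingredients can likewise be sketched in miniature. The snippet below shows the standard Maxmin Q-learning target (a minimum over an ensemble of Q-estimators taken before the greedy max over actions, which curbs overestimation bias) alongside a generic energy-penalized reward standing in for surprise minimization. The shaping form r - alpha * E(s), the coefficient `alpha`, and the function names are assumptions for illustration, not the authors' exact objective.

```python
import torch

def maxmin_q_target(reward, next_q_ensemble, gamma=0.99):
    """Compute r + gamma * max_a' min_j Q_j(s', a').

    next_q_ensemble: tensor of shape (n_estimators, n_actions) holding each
    ensemble member's Q-values at the next state. Taking the min over
    estimators before the max over actions counteracts overestimation.
    """
    min_q = next_q_ensemble.min(dim=0).values  # per-action min over estimators
    return reward + gamma * min_q.max()        # greedy max over actions

def surprise_penalized_reward(reward, energy, alpha=0.1):
    """Subtract a scaled energy term so that low-energy (familiar,
    unsurprising) joint observations are preferred; `energy` stands in
    for an energy function E(s) evaluated on the agents' observations."""
    return reward - alpha * energy

# Toy usage: 3 Q-estimators over 5 actions at the next state.
target = maxmin_q_target(reward=1.0, next_q_ensemble=torch.randn(3, 5))
shaped = surprise_penalized_reward(reward=1.0, energy=torch.tensor(0.4))
```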