Paper Title
On Solving Cooperative MARL Problems with a Few Good Experiences
Paper Authors
Paper Abstract
Cooperative Multi-agent Reinforcement Learning (MARL) is crucial for cooperative decentralized decision learning in many domains, such as search and rescue, drone surveillance, package delivery, and firefighting. In these domains, a key challenge is learning with a few good experiences, i.e., positive reinforcements are obtained only in a few situations (e.g., on extinguishing a fire, tracking a criminal, or delivering a package), while in most other situations there is zero or negative reinforcement. Learning decisions with a few good experiences is extremely challenging in cooperative MARL problems for three reasons. First, compared to the single-agent case, exploration is harder because multiple agents have to be coordinated to receive a good experience. Second, the environment is non-stationary, as all the agents are learning at the same time (and hence changing their policies). Third, the scale of the problem increases significantly with every additional agent. Relevant existing work is extensive and has focused either on dealing with a few good experiences in single-agent RL problems or on scalable approaches for handling non-stationarity in MARL problems. Unfortunately, neither of these lines of work (nor their extensions) is able to address the problem of sparse good experiences effectively. Therefore, we provide a novel fictitious self-imitation approach that is able to simultaneously handle non-stationarity and sparse good experiences in a scalable manner. Finally, we provide a thorough comparison (experimental or descriptive) against relevant cooperative MARL algorithms to demonstrate the utility of our approach.
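
To make the two ingredients named in the abstract concrete, here is a minimal, hypothetical Python sketch. It is not the paper's actual algorithm: the self-imitation part follows the clipped-advantage idea of self-imitation learning (Oh et al., 2018), replaying only past actions whose realized return beat the current value estimate (the "few good experiences"), and the fictitious part trains one agent at a time against the frozen average policies of the others so the world it sees is approximately stationary. `GoodExperienceBuffer`, `env.rollout`, `average_policy`, and `learner.update` are illustrative assumptions, not interfaces from the paper.

```python
import numpy as np

class GoodExperienceBuffer:
    """Stores (state, action, return) triples from past episodes."""

    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.data = []

    def add_episode(self, states, actions, rewards, gamma=0.99):
        # Compute discounted returns backwards through the episode.
        g, returns = 0.0, []
        for r in reversed(rewards):
            g = r + gamma * g
            returns.append(g)
        returns.reverse()
        self.data.extend(zip(states, actions, returns))
        self.data = self.data[-self.capacity:]  # drop oldest beyond capacity

    def sample(self, batch_size):
        idx = np.random.randint(len(self.data), size=batch_size)
        return [self.data[i] for i in idx]

def self_imitation_loss(policy, value_fn, batch):
    # Imitate an action only when its realized return exceeded the
    # current value estimate: advantage clipped at zero, so zero- and
    # negative-reinforcement experiences contribute nothing.
    loss = 0.0
    for state, action, ret in batch:
        advantage = max(ret - value_fn(state), 0.0)
        loss -= np.log(policy(state)[action]) * advantage
    return loss / len(batch)

def fictitious_training_round(agents, env, buffers, episodes_per_agent=100):
    # Fictitious-play style schedule: each agent updates against the
    # frozen average behaviour of the others, rather than against
    # their rapidly changing latest policies.
    for i, learner in enumerate(agents):
        frozen = [a.average_policy() for j, a in enumerate(agents) if j != i]
        for _ in range(episodes_per_agent):
            states, actions, rewards = env.rollout(learner, frozen)
            buffers[i].add_episode(states, actions, rewards)
            batch = buffers[i].sample(batch_size=64)
            # Assumed interface: the learner applies a gradient step
            # on the scalar loss returned above.
            learner.update(self_imitation_loss(learner.policy, learner.value, batch))
```

Under these assumptions, the buffer addresses the sparse-good-experiences challenge and the alternating, frozen-opponent schedule addresses non-stationarity; the paper's contribution is combining the two scalably.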