Paper Title

Average submodularity of maximizing anticoordination in network games

Authors

Soham Das and Ceyhun Eksin

Abstract

We consider the control of decentralized learning dynamics for agents in an anti-coordination network game. In the anti-coordination network game, there is a preferred action in the absence of neighbors' actions, and the utility an agent receives from the preferred action decreases as more of its neighbors select it, potentially causing the agent to select a less desirable action. The decentralized dynamics, which are based on the iterated elimination of dominated strategies, converge for the considered game. Given a convergent action profile, we measure anti-coordination by the number of edges in the underlying graph with at least one endpoint agent not taking the preferred action. The maximum anti-coordination (MAC) problem seeks an optimal set of agents to control under a finite budget so that the overall network disconnect is maximized upon game convergence as a result of the dynamics. We show that the MAC objective is submodular in expectation in dense bipartite networks for any realization of the utility constants in the population. Utilizing this result, we obtain a performance guarantee for the greedy agent selection algorithm for MAC. Finally, we provide a computational study demonstrating the effectiveness of greedy node selection strategies in solving MAC on general bipartite networks.
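The greedy selection procedure the abstract describes can be sketched as follows. This is an illustrative stand-in, not the paper's exact model: the convergence routine below uses simple sequential threshold best-response dynamics (an agent plays the preferred action only while fewer than `thresholds[v]` neighbors do), and the graph, thresholds, and function names are all hypothetical.

```python
def converge(adj, controlled, thresholds):
    """Run sequential threshold best-response dynamics to a fixed point.
    Action 1 is the preferred action; controlled agents are pinned to 0."""
    action = {v: (0 if v in controlled else 1) for v in adj}
    for _ in range(len(adj) ** 2):  # enough sweeps for small graphs
        changed = False
        for v in adj:
            if v in controlled:
                continue
            desired = 1 if sum(action[u] for u in adj[v]) < thresholds[v] else 0
            if desired != action[v]:
                action[v] = desired
                changed = True
        if not changed:  # reached a convergent action profile
            break
    return action

def anti_coordination(adj, action):
    """Count edges with at least one endpoint not taking the preferred action."""
    return sum(1 for v in adj for u in adj[v]
               if v < u and (action[v] == 0 or action[u] == 0))

def greedy_mac(adj, thresholds, budget):
    """Greedily add the agent whose control most increases anti-coordination."""
    controlled = set()
    for _ in range(budget):
        best = max((v for v in adj if v not in controlled),
                   key=lambda v: anti_coordination(
                       adj, converge(adj, controlled | {v}, thresholds)))
        controlled.add(best)
    return controlled
```

On a small bipartite instance such as `adj = {0: [3, 4], 1: [3, 4], 2: [4], 3: [0, 1], 4: [0, 1, 2]}` with uniform thresholds of 2, `greedy_mac(adj, thresholds, budget=1)` picks the single agent whose forced deviation maximizes the number of non-preferred edges at convergence. The submodularity-in-expectation result mentioned above is what gives this greedy loop its approximation guarantee.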
