Paper Title
Strategy Complexity of Point Payoff, Mean Payoff and Total Payoff Objectives in Countable MDPs
Paper Authors
Paper Abstract
我们研究了具有实价过渡奖励的无限马尔可夫决策过程(MDP)。每次无限运行都会引起以下收益序列:1。点收益(直接看到过渡奖励的序列),2。平均收益(到目前为止所有奖励的总和的序列,除以步骤数量)和3。总收益(到目前为止所有奖励的总和的序列)。对于每种收益类型,目标是最大化$ \ liminf $不负的概率。我们建立了这些目标的策略复杂性的完整图片,即,对于$ \ varepsilon $ - 最佳(最佳)策略,需要多少内存和足够的记忆。有些情况可以通过无记忆的确定性策略赢得,而另一些情况则需要台阶计数器,奖励计数器或两者兼而有之。
We study countably infinite Markov decision processes (MDPs) with real-valued transition rewards. Every infinite run induces the following sequences of payoffs: 1. Point payoff (the sequence of directly seen transition rewards), 2. Mean payoff (the sequence of the sums of all rewards so far, divided by the number of steps), and 3. Total payoff (the sequence of the sums of all rewards so far). For each payoff type, the objective is to maximize the probability that the $\liminf$ is non-negative. We establish the complete picture of the strategy complexity of these objectives, i.e., how much memory is necessary and sufficient for $\varepsilon$-optimal (resp. optimal) strategies. Some cases can be won with memoryless deterministic strategies, while others require a step counter, a reward counter, or both.
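To make the three payoff notions concrete, here is a short sketch of the definitions the abstract describes; the symbols $r_n$, $p_n$, $m_n$, $t_n$ and the probability notation below are ours, chosen for illustration rather than taken from the paper. For an infinite run with transition rewards $r_1, r_2, r_3, \dots$:
\[
\text{point payoff: } p_n = r_n, \qquad
\text{mean payoff: } m_n = \frac{1}{n}\sum_{i=1}^{n} r_i, \qquad
\text{total payoff: } t_n = \sum_{i=1}^{n} r_i .
\]
For each payoff type, a strategy $\sigma$ from an initial state $s_0$ is evaluated by the probability $\mathcal{P}_{\sigma,s_0}\bigl[\liminf_{n\to\infty} x_n \ge 0\bigr]$ with $x_n \in \{p_n, m_n, t_n\}$, and the strategy-complexity question is how much memory (e.g., a step counter, a reward counter, or both) $\varepsilon$-optimal and optimal strategies need to maximize this probability.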