Paper Title
Reinforcement Learning Approaches for the Orienteering Problem with Stochastic and Dynamic Release Dates
Paper Authors
Paper Abstract
In this paper, we study a sequential decision-making problem faced by e-commerce carriers: when to send out a vehicle from the central depot to serve customer requests, and in which order to provide the service, under the assumption that the times at which parcels arrive at the depot are stochastic and dynamic. The objective is to maximize the expected number of parcels that can be delivered during service hours. We propose two reinforcement learning (RL) approaches for solving this problem. These approaches rely on a look-ahead strategy in which future release dates are sampled in a Monte-Carlo fashion and a batch approach is used to approximate future routes. Both RL approaches are based on value function approximation: one combines it with a consensus function (VFA-CF), the other with a two-stage stochastic integer linear programming model (VFA-2S). VFA-CF and VFA-2S do not need extensive training, as they are based on very few hyper-parameters and make good use of integer linear programming (ILP) and branch-and-cut-based exact methods to improve the quality of decisions. We also establish sufficient conditions for a partial characterization of the optimal policy and integrate them into VFA-CF/VFA-2S. In an empirical study, we conduct a competitive analysis using upper bounds computed with perfect information. We also show that VFA-CF and VFA-2S greatly outperform alternative approaches that: 1) do not rely on future information, or 2) are based on point estimates of future information, or 3) employ heuristics rather than exact methods, or 4) use exact evaluations of future rewards.
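To make the look-ahead idea concrete, below is a minimal, illustrative Python sketch (not the authors' implementation) of a Monte-Carlo look-ahead with a consensus-style choice between dispatching the vehicle now and waiting for more parcels. The release-date model, the reward estimator (a crude stand-in for the batch route approximation and the exact ILP components described above), and all function and parameter names are hypothetical assumptions introduced here for illustration.

```python
import random
from statistics import mean

def sample_release_dates(pending_parcels, horizon, rng):
    """Sample one scenario of future release dates (hypothetical uniform model)."""
    return {p: rng.uniform(0, horizon) for p in pending_parcels}

def estimate_reward(action, now, released, scenario, horizon, service_rate=4.0):
    """Crude stand-in for the batch route approximation: count parcels that
    could plausibly be delivered before the end of service hours."""
    if action == "dispatch":
        # Leave now with already-released parcels; sampled future arrivals
        # are served on a later trip, if time remains.
        deliverable_now = min(len(released), int((horizon - now) * service_rate))
        later = [t for t in scenario.values() if t > now]
        deliverable_later = min(len(later), int(max(horizon - now - 1.0, 0) * service_rate))
        return deliverable_now + deliverable_later
    else:  # "wait": postpone dispatch by one period, consolidating arrivals
        arrived_by_next = len(released) + sum(1 for t in scenario.values() if t <= now + 1.0)
        return min(arrived_by_next, int(max(horizon - now - 1.0, 0) * service_rate))

def consensus_decision(now, released, pending, horizon, n_scenarios=50, seed=0):
    """Monte-Carlo look-ahead: evaluate each candidate action across sampled
    scenarios and pick the one with the best average estimated reward."""
    rng = random.Random(seed)
    scores = {"dispatch": [], "wait": []}
    for _ in range(n_scenarios):
        scenario = sample_release_dates(pending, horizon, rng)
        for action in scores:
            scores[action].append(estimate_reward(action, now, released, scenario, horizon))
    return max(scores, key=lambda a: mean(scores[a]))

if __name__ == "__main__":
    decision = consensus_decision(now=2.0,
                                  released=["p1", "p2", "p3"],
                                  pending=["p4", "p5", "p6", "p7"],
                                  horizon=8.0)
    print("Chosen action:", decision)
```

In the paper's approaches, the per-scenario evaluation is handled by the batch route approximation and exact ILP/branch-and-cut machinery rather than the simple counting heuristic used in this sketch.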