Paper Title
Differentiable Greedy Submodular Maximization: Guarantees, Gradient Estimators, and Applications
Paper Authors
Paper Abstract
Motivated by, e.g., sensitivity analysis and end-to-end learning, the demand for differentiable optimization algorithms has been increasing significantly. In this paper, we establish a theoretically guaranteed versatile framework that makes the greedy algorithm for monotone submodular function maximization differentiable. We smooth the greedy algorithm via randomization, and prove that it almost recovers the original approximation guarantees in expectation for the cases of cardinality and $\kappa$-extensible system constraints. We also show how to efficiently compute unbiased gradient estimators of any expected output-dependent quantities. We demonstrate the usefulness of our framework by instantiating it for various applications.
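To illustrate the idea of smoothing the greedy algorithm via randomization, the sketch below replaces the deterministic arg-max step of greedy with softmax sampling over marginal gains, so that the distribution over outputs varies smoothly with the underlying function values. This is a minimal illustration under assumed design choices (softmax randomization with a `temperature` parameter, a toy weighted-coverage objective), not a reproduction of the paper's exact smoothing scheme or its gradient estimators.

```python
import math
import random


def smoothed_greedy(ground_set, f, k, temperature=1.0, rng=None):
    """Randomized ("smoothed") greedy for monotone submodular maximization
    under a cardinality constraint |S| <= k.

    Instead of always picking the element with the largest marginal gain,
    each step samples an element with probability proportional to
    exp(gain / temperature).  As temperature -> 0 this recovers the
    deterministic greedy algorithm; larger temperatures give a smoother
    (but noisier) output distribution.
    """
    rng = rng or random.Random(0)
    S = []
    remaining = list(ground_set)
    for _ in range(k):
        gains = [f(S + [e]) - f(S) for e in remaining]
        m = max(gains)  # subtract max for numerical stability
        weights = [math.exp((g - m) / temperature) for g in gains]
        total = sum(weights)
        r = rng.random() * total
        acc = 0.0
        chosen = remaining[-1]
        for e, w in zip(remaining, weights):
            acc += w
            if r <= acc:
                chosen = e
                break
        S.append(chosen)
        remaining.remove(chosen)
    return S


# Toy monotone submodular objective: set coverage.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}, 4: {"a"}}


def coverage(S):
    return len(set().union(*(sets[i] for i in S))) if S else 0


# At a low temperature the sampled solution almost always matches
# deterministic greedy: pick element 3 (gain 3), then element 1 (gain 2).
S = smoothed_greedy(sets.keys(), coverage, k=2, temperature=0.1)
print(sorted(S), coverage(S))
```

Because each step is a sampling operation with explicit probabilities, expected output-dependent quantities become differentiable in the inputs, which is what enables score-function (REINFORCE-style) gradient estimation of the kind the paper's unbiased estimators provide.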