Paper Title
Graph Neural Network Bandits
Paper Authors
Paper Abstract
We consider the bandit optimization problem with the reward function defined over graph-structured data. This problem has important applications in molecule design and drug discovery, where the reward is naturally invariant to graph permutations. The key challenges in this setting are scaling to large domains, and to graphs with many nodes. We resolve these challenges by embedding the permutation invariance into our model. In particular, we show that graph neural networks (GNNs) can be used to estimate the reward function, assuming it resides in the Reproducing Kernel Hilbert Space of a permutation-invariant additive kernel. By establishing a novel connection between such kernels and the graph neural tangent kernel (GNTK), we introduce the first GNN confidence bound and use it to design a phased-elimination algorithm with sublinear regret. Our regret bound depends on the GNTK's maximum information gain, which we also provide a bound for. While the reward function depends on all $N$ node features, our guarantees are independent of the number of graph nodes $N$. Empirically, our approach exhibits competitive performance and scales well on graph-structured domains.
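To make the phased-elimination idea concrete, below is a minimal, self-contained Python sketch of such a loop. It is purely illustrative: the helpers fit_model, confidence_width, and true_reward are hypothetical stand-ins for the paper's GNN estimator, GNTK-based confidence bound, and graph-reward oracle, and the linear toy model on fixed feature vectors is used only so the sketch runs end to end; it is not the authors' method.

```python
# Hedged sketch of a generic phased-elimination bandit loop with model-based
# confidence bounds. fit_model, confidence_width, and true_reward are
# placeholder stand-ins for the paper's GNN/GNTK machinery.

import numpy as np

rng = np.random.default_rng(0)

# Toy domain: each candidate "graph" is represented by a fixed feature vector.
num_candidates, dim = 50, 8
candidates = rng.normal(size=(num_candidates, dim))
theta_star = rng.normal(size=dim)

def true_reward(x):
    # Hypothetical noisy reward oracle (linear here, purely for illustration).
    return x @ theta_star + 0.1 * rng.normal()

def fit_model(X, y, lam=1.0):
    # Stand-in for training the GNN estimator: ridge regression on observed data.
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y), A

def confidence_width(x, A, beta=2.0):
    # Stand-in for the GNN confidence bound: scaled norm induced by A^{-1}.
    return beta * np.sqrt(x @ np.linalg.solve(A, x))

active = list(range(num_candidates))
X_hist, y_hist = [], []

for phase in range(6):
    # Pull each active candidate a number of times that grows with the phase.
    pulls = 2 ** phase
    for idx in active:
        for _ in range(pulls):
            X_hist.append(candidates[idx])
            y_hist.append(true_reward(candidates[idx]))

    theta_hat, A = fit_model(np.array(X_hist), np.array(y_hist))

    # Upper/lower confidence bounds for every active candidate.
    means = np.array([candidates[i] @ theta_hat for i in active])
    widths = np.array([confidence_width(candidates[i], A) for i in active])
    best_lcb = np.max(means - widths)

    # Eliminate candidates whose UCB falls below the best LCB.
    active = [i for i, m, w in zip(active, means, widths) if m + w >= best_lcb]
    print(f"phase {phase}: {len(active)} candidates remain")

scores = [candidates[i] @ theta_hat for i in active]
print("recommended candidate:", active[int(np.argmax(scores))])
```

The elimination rule, dropping any candidate whose upper confidence bound falls below the best lower confidence bound among the survivors, is what drives sublinear regret in phased-elimination analyses; in the paper this rule is driven by the GNN confidence bound built from the GNTK rather than the ridge-regression width used in this sketch.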