Paper Title


PACE: A Parallelizable Computation Encoder for Directed Acyclic Graphs

Paper Authors

Zehao Dong, Muhan Zhang, Fuhai Li, Yixin Chen

Paper Abstract


Optimization of directed acyclic graph (DAG) structures has many applications, such as neural architecture search (NAS) and probabilistic graphical model learning. Encoding DAGs into real vectors is a dominant component in most neural-network-based DAG optimization frameworks. Currently, most DAG encoders use an asynchronous message passing scheme which sequentially processes nodes according to the dependency between nodes in a DAG. That is, a node must not be processed until all its predecessors are processed. As a result, they are inherently not parallelizable. In this work, we propose a Parallelizable Attention-based Computation structure Encoder (PACE) that processes nodes simultaneously and encodes DAGs in parallel. We demonstrate the superiority of PACE through encoder-dependent optimization subroutines that search the optimal DAG structure based on the learned DAG embeddings. Experiments show that PACE not only improves the effectiveness over previous sequential DAG encoders with a significantly boosted training and inference speed, but also generates smooth latent (DAG encoding) spaces that are beneficial to downstream optimization subroutines. Our source code is available at https://github.com/zehao-dong/PACE.
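The abstract's key observation is that asynchronous message passing is inherently sequential: a node's state can only be computed after all of its predecessors' states are ready, which forces a topological-order traversal. Below is a minimal illustrative sketch of that scheme (not the paper's actual encoder; the function name and the toy `combine` are assumptions for illustration) using Kahn's algorithm:

```python
from collections import defaultdict, deque

def async_dag_encode(num_nodes, edges, feats, combine):
    """Asynchronous message passing over a DAG: each node's state is
    computed only after all of its predecessors' states exist, so nodes
    are visited one at a time in topological (Kahn) order."""
    preds, succs = defaultdict(list), defaultdict(list)
    indeg = [0] * num_nodes
    for u, v in edges:
        preds[v].append(u)
        succs[u].append(v)
        indeg[v] += 1
    ready = deque(i for i in range(num_nodes) if indeg[i] == 0)
    h = [None] * num_nodes   # node states, filled strictly sequentially
    order = []               # the sequential processing order
    while ready:
        v = ready.popleft()
        order.append(v)
        # Messages from all predecessors are guaranteed to be computed.
        h[v] = combine(feats[v], [h[u] for u in preds[v]])
        for w in succs[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    return h, order

# Diamond DAG 0 -> {1, 2} -> 3; the toy "combine" just sums incoming states.
h, order = async_dag_encode(
    4, [(0, 1), (0, 2), (1, 3), (2, 3)], [1, 1, 1, 1],
    lambda x, msgs: x + sum(msgs),
)
```

Even on this diamond-shaped DAG, node 3 cannot start until both 1 and 2 finish, which is exactly the dependency chain that PACE's parallel, attention-based encoding is designed to avoid.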
