Paper Title
TEMPI: An Interposed MPI Library with a Canonical Representation of CUDA-aware Datatypes
Paper Authors
Paper Abstract
MPI derived datatypes are an abstraction that simplifies handling of non-contiguous data in MPI applications. These datatypes are recursively constructed at runtime from primitive Named Types defined in the MPI standard. More recently, the development and deployment of CUDA-aware MPI implementations have encouraged the transition of distributed high-performance MPI codes to use GPUs. Such implementations allow MPI functions to operate directly on GPU buffers, easing integration of GPU compute into MPI codes. This work first presents a novel datatype handling strategy for nested strided datatypes, which finds a middle ground between the specialized and generic handling of prior work. This work also shows that the performance characteristics of non-contiguous data handling can be modeled with empirical system measurements and used to transparently improve MPI_Send/Recv latency. Finally, despite substantial attention to non-contiguous GPU data and CUDA-aware MPI implementations, good performance cannot be taken for granted. This work demonstrates its contributions through an MPI interposer library, TEMPI. TEMPI can be used with existing MPI deployments without system or application changes. Ultimately, the interposed-library model of this work demonstrates MPI_Pack speedup of up to 242,000x and MPI_Send speedup of up to 59,000x compared to the MPI implementation deployed on a leadership-class supercomputer. This yields a speedup of more than 917x in a 3D halo exchange with 3,072 processes.
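As a concrete illustration of the kind of nested strided datatype the abstract refers to, the following is a minimal sketch (not taken from TEMPI; the grid extents are hypothetical) that composes MPI_Type_vector and MPI_Type_create_hvector to describe a 3D sub-block and packs it with MPI_Pack. With a CUDA-aware MPI, the source buffer could instead be a cudaMalloc'd device pointer.

```c
/* Minimal sketch: a nested strided datatype for the sub-block of a 3D grid,
 * packed with MPI_Pack. Not TEMPI code; extents are hypothetical. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  const int nx = 64, ny = 64, nz = 64;   /* full grid extents        */
  const int sx = 32, sy = 32, sz = 32;   /* sub-block to extract     */

  /* Inner type: sy rows of sx floats, each row strided by nx floats. */
  MPI_Datatype plane, block;
  MPI_Type_vector(sy, sx, nx, MPI_FLOAT, &plane);
  /* Outer type: sz planes of the inner type, strided by one full plane (in bytes). */
  MPI_Type_create_hvector(sz, 1, (MPI_Aint)nx * ny * sizeof(float), plane, &block);
  MPI_Type_commit(&block);

  /* With a CUDA-aware MPI, src could be a device pointer instead. */
  float *src = malloc((size_t)nx * ny * nz * sizeof(float));

  int packed_size = 0;
  MPI_Pack_size(1, block, MPI_COMM_WORLD, &packed_size);
  char *dst = malloc(packed_size);

  int pos = 0;
  MPI_Pack(src, 1, block, dst, packed_size, &pos, MPI_COMM_WORLD);

  free(dst);
  free(src);
  MPI_Type_free(&block);
  MPI_Type_free(&plane);
  MPI_Finalize();
  return 0;
}
```

The interposed-library model itself can be understood through the standard MPI profiling (PMPI) interface: an interposer defines the MPI_* symbols, does its own work, and forwards to the PMPI_* entry points of the underlying MPI. The sketch below shows that general mechanism only; it is not TEMPI's actual implementation.

```c
/* Minimal sketch of MPI interposition via the PMPI profiling interface.
 * Built as a shared library and resolved ahead of the MPI library
 * (e.g., via LD_PRELOAD), it requires no application or system changes. */
#include <mpi.h>

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm) {
  /* A real interposer could inspect `datatype` here and apply its own
   * handling for non-contiguous GPU buffers before forwarding. */
  return PMPI_Send(buf, count, datatype, dest, tag, comm);
}
```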