Paper Title
MLGOPerf: An ML Guided Inliner to Optimize Performance
Paper Authors
Paper Abstract
For the past 25 years, we have witnessed extensive application of Machine Learning to the compiler space, most notably to the selection and phase-ordering problems. However, few works have been upstreamed into state-of-the-art compilers, i.e., LLVM, to seamlessly integrate ML into a compiler's optimization pipeline so it can be readily deployed by users. MLGO was among the first such projects, and it strives only to reduce the code size of a binary with an ML-based Inliner driven by Reinforcement Learning. This paper presents MLGOPerf, the first end-to-end framework capable of optimizing performance using LLVM's ML-Inliner. It employs a secondary ML model to generate the rewards used for training a retargeted Reinforcement Learning agent, previously used as the primary model by MLGO. It does so by predicting the post-inlining speedup of the function under analysis, enabling a fast training framework for the primary model that would otherwise be impractical. Experimental results show that MLGOPerf gains up to 1.8% and 2.2% over LLVM's optimization at O3 when trained for performance on the SPEC CPU2006 and Cbench benchmarks, respectively. Furthermore, the proposed approach provides up to 26% more opportunities to autotune code regions in our benchmarks, which can be translated into an additional 3.7% speedup.
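The two-model design the abstract describes can be sketched as follows: the secondary model stands in for a costly compile-and-run cycle by predicting post-inlining speedup, and that prediction becomes the reward for the primary RL agent. This is a minimal, self-contained illustration under stated assumptions, not the paper's implementation; every name here (SpeedupPredictor, InliningAgent, make_features) is hypothetical and the models are trivial stubs.

```python
# Hypothetical sketch of using a speedup-predictor as the RL reward signal.
import random

class SpeedupPredictor:
    """Secondary model stub: estimates the post-inlining speedup of a
    function from its features, so the agent never has to compile and
    run the binary during training."""
    def predict(self, features):
        # Stand-in for a trained regressor; returns a speedup ratio
        # relative to the -O3 baseline (1.0 = no change).
        return 1.0 + 0.01 * sum(features)

class InliningAgent:
    """Primary model stub: a deliberately simple value-based policy
    standing in for the Reinforcement Learning inlining agent."""
    def __init__(self):
        self.value = {True: 0.0, False: 0.0}  # inline / don't inline
    def choose(self):
        return max(self.value, key=self.value.get)
    def update(self, action, reward, lr=0.1):
        self.value[action] += lr * (reward - self.value[action])

def make_features(inlined):
    # Hypothetical static features of a function after the decision;
    # inlining shifts them so the predictor sees a higher speedup.
    return [random.random() + (0.5 if inlined else 0.0) for _ in range(4)]

predictor = SpeedupPredictor()
agent = InliningAgent()
for _ in range(100):
    # Epsilon-greedy exploration over the inline/no-inline decision.
    action = agent.choose() if random.random() > 0.2 else random.choice([True, False])
    features = make_features(action)
    # Reward is the *predicted* speedup over baseline, not a measured one.
    reward = predictor.predict(features) - 1.0
    agent.update(action, reward)

print("learned preference:", agent.choose())  # likely True (inline)
```

The design point this illustrates is the one the abstract emphasizes: because the reward comes from a fast prediction rather than from executing the binary, training the primary model becomes practical at scale.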