Paper Title


Efficient hyperparameter tuning for kernel ridge regression with Bayesian optimization

Paper Authors

Annika Stuke, Patrick Rinke, Milica Todorović

Abstract


Machine learning methods usually depend on internal parameters -- so-called hyperparameters -- that need to be optimized for best performance. Such optimization poses a burden on machine learning practitioners, requiring expert knowledge, intuition, or computationally demanding brute-force parameter searches. Here we address the need for more efficient, automated hyperparameter selection with Bayesian optimization. We apply this technique to the kernel ridge regression machine learning method for two different descriptors for the atomic structure of organic molecules, one of which introduces its own set of hyperparameters to the method. We identify optimal hyperparameter configurations and infer entire prediction error landscapes in hyperparameter space that serve as visual guides to the hyperparameter dependence. We further demonstrate that for an increasing number of hyperparameters, Bayesian optimization becomes significantly more efficient in computational time than an exhaustive grid search -- the current default standard hyperparameter search method -- while delivering equivalent or even better accuracy.
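As a concrete illustration of the workflow the abstract describes, the following is a minimal, self-contained sketch in pure NumPy: a toy kernel ridge regression model whose two hyperparameters (regularization strength lambda and kernel width gamma) are tuned by a simple Bayesian optimization loop using a Gaussian-process surrogate and a lower-confidence-bound acquisition. The synthetic data, search bounds, and acquisition rule are illustrative assumptions, not the authors' actual setup, which uses real molecular descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data standing in for molecular descriptors
# (hypothetical stand-in; the paper uses descriptors of organic molecules).
X = rng.normal(size=(80, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
X_tr, X_val, y_tr, y_val = X[:60], X[60:], y[:60], y[60:]

def rbf(A, B, gamma):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_val_mae(log_lam, log_gamma):
    """Validation MAE of kernel ridge regression for one hyperparameter pair."""
    lam, gamma = 10.0 ** log_lam, 10.0 ** log_gamma
    K = rbf(X_tr, X_tr, gamma)
    coef = np.linalg.solve(K + (lam + 1e-10) * np.eye(len(X_tr)), y_tr)
    pred = rbf(X_val, X_tr, gamma) @ coef
    return np.abs(pred - y_val).mean()

def gp_posterior(X_obs, y_obs, X_query, length_scale=1.0, noise=1e-6):
    """Zero-mean GP posterior mean and std at X_query (RBF covariance)."""
    g = 1.0 / (2.0 * length_scale ** 2)
    K = rbf(X_obs, X_obs, g) + noise * np.eye(len(X_obs))
    Ks = rbf(X_query, X_obs, g)
    mu = Ks @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

# Search box in log10 space: regularization lambda and kernel width gamma.
bounds = np.array([[-6.0, 0.0], [-3.0, 1.0]])
thetas = rng.uniform(bounds[:, 0], bounds[:, 1], size=(4, 2))  # random warm-up
maes = np.array([krr_val_mae(*t) for t in thetas])

for _ in range(15):
    # Fit the GP surrogate to normalized observations, then pick the
    # candidate that minimizes a lower-confidence-bound acquisition.
    z = (maes - maes.mean()) / (maes.std() + 1e-12)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(256, 2))
    mu, sd = gp_posterior(thetas, z, cand)
    nxt = cand[np.argmin(mu - 2.0 * sd)]
    thetas = np.vstack([thetas, nxt])
    maes = np.append(maes, krr_val_mae(*nxt))

best_theta = thetas[np.argmin(maes)]
print("best (log10 lambda, log10 gamma):", best_theta, "MAE:", maes.min())
```

Each iteration spends one expensive model evaluation where the surrogate predicts a low error or high uncertainty, which is why the method scales better than a grid search whose cost grows exponentially with the number of hyperparameters.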
