Title

Online Regularized Learning Algorithm for Functional Data

Authors

Yuan Mao, Zheng-Chu Guo

Abstract

In recent years, functional linear models have attracted growing attention in statistics and machine learning, with the aim of recovering the slope function or the functional predictor. This paper considers an online regularized learning algorithm for functional linear models in reproducing kernel Hilbert spaces. Convergence analyses of the excess prediction error and the estimation error are provided, with polynomially decaying step-sizes and constant step-sizes, respectively. Fast convergence rates can be derived via a capacity-dependent analysis. By introducing an explicit regularization term, we lift the saturation boundary of unregularized online learning algorithms when the step-size decays polynomially, and establish fast convergence rates for the estimation error without any capacity assumption. However, it remains an open problem to obtain capacity-independent convergence rates for the estimation error of the unregularized online learning algorithm with decaying step-sizes. We also show that the convergence rates of both the prediction error and the estimation error with constant step-sizes are competitive with those in the literature.
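To make the setting concrete, the following is a minimal sketch of the kind of online regularized update the abstract describes, applied to a grid-discretized functional linear model (a finite-dimensional surrogate for the RKHS setting of the paper). The step-size schedule `eta_t = eta0 / (1 + t)**theta`, the regularization strength `lambda_reg`, and the synthetic data are all illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20                                   # grid points discretizing [0, 1]
grid = np.linspace(0.0, 1.0, d)
beta_true = np.sin(2 * np.pi * grid)     # "slope function" sampled on the grid

def online_regularized(T=3000, eta0=0.05, theta=0.5, lambda_reg=0.01):
    """One pass of regularized online gradient descent.

    At step t we observe (X_t, Y_t) and take a stochastic gradient step on
    the regularized squared loss:
        beta <- beta - eta_t * ((<X_t, beta> - Y_t) * X_t + lambda_reg * beta)
    with a polynomially decaying step-size eta_t = eta0 / (1 + t)**theta.
    """
    beta = np.zeros(d)
    for t in range(1, T + 1):
        X = rng.standard_normal(d)                       # covariate sample
        Y = X @ beta_true + 0.1 * rng.standard_normal()  # noisy response
        eta = eta0 / (1.0 + t) ** theta                  # decaying step-size
        beta -= eta * ((X @ beta - Y) * X + lambda_reg * beta)
    return beta

beta_hat = online_regularized()
estimation_error = np.sqrt(np.mean((beta_hat - beta_true) ** 2))
```

The explicit `lambda_reg * beta` term is what distinguishes this regularized iteration from the unregularized online algorithms the abstract compares against; setting `theta = 0` in the sketch recovers a constant step-size schedule.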
