Paper Title
Scaled minimax optimality in high-dimensional linear regression: A non-convex algorithmic regularization approach
Paper Authors
Paper Abstract
The question of fast convergence in the classical problem of high-dimensional linear regression has been extensively studied. Arguably, one of the fastest procedures in practice is Iterative Hard Thresholding (IHT). Still, IHT relies strongly on knowledge of the true sparsity parameter $s$. In this paper, we present a novel fast procedure for estimation in high-dimensional linear regression. Taking advantage of the interplay between estimation, support recovery, and optimization, we achieve both optimal statistical accuracy and fast convergence. The main advantage of our procedure is that it is fully adaptive, making it more practical than state-of-the-art IHT methods. Our procedure achieves optimal statistical accuracy faster than, for instance, classical algorithms for the Lasso. Moreover, we establish sharp optimal results for both estimation and support recovery. As a consequence, we present a new iterative hard thresholding algorithm for high-dimensional linear regression that is scaled minimax optimal (it achieves, whenever possible, the estimation error of the oracle that knows the sparsity pattern), fast, and adaptive.
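For context, the baseline the abstract contrasts against is the classical (non-adaptive) IHT iteration, which alternates a gradient step on the least-squares objective with hard thresholding to the $s$ largest coordinates. Below is a minimal Python sketch of that baseline; the function name, step-size choice, and stopping rule are illustrative assumptions, not the paper's adaptive procedure. The sketch is meant only to show where knowledge of the true sparsity $s$ enters.

```python
import numpy as np

def iterative_hard_thresholding(X, y, s, step=None, n_iter=200):
    """Classical (non-adaptive) IHT for the sparse linear model y = X beta + noise.

    Note that the target sparsity `s` must be supplied up front; removing
    this dependence is precisely the adaptivity the paper is after.
    """
    n, p = X.shape
    if step is None:
        # Conservative step size: inverse of the gradient's Lipschitz
        # constant, i.e. 1 / ||X||_2^2 (squared spectral norm).
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(n_iter):
        # Gradient step on the least-squares objective 0.5 * ||y - X beta||^2
        grad = X.T @ (X @ beta - y)
        z = beta - step * grad
        # Hard threshold: keep only the s largest entries in absolute value
        keep = np.argsort(np.abs(z))[-s:]
        beta_new = np.zeros(p)
        beta_new[keep] = z[keep]
        if np.linalg.norm(beta_new - beta) < 1e-10:
            break
        beta = beta_new
    return beta

# Toy usage on synthetic data (illustrative only)
rng = np.random.default_rng(0)
n, p, s = 100, 500, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = iterative_hard_thresholding(X, y, s)  # requires the true s
```

If `s` is misspecified in this sketch, the projection step keeps the wrong number of coordinates, which is why the fully adaptive procedure announced in the abstract is more practical.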