Paper Title

Strong Memory Lower Bounds for Learning Natural Models

Paper Authors

Gavin Brown, Mark Bun, Adam Smith

Paper Abstract

We give lower bounds on the amount of memory required by one-pass streaming algorithms for solving several natural learning problems. In a setting where examples lie in $\{0,1\}^d$ and the optimal classifier can be encoded using $κ$ bits, we show that algorithms which learn using a near-minimal number of examples, $\tilde O(κ)$, must use $\tilde Ω(dκ)$ bits of space. Our space bounds match the dimension of the ambient space of the problem's natural parametrization, even when it is quadratic in the size of examples and the final classifier. For instance, in the setting of $d$-sparse linear classifiers over degree-2 polynomial features, for which $κ = Θ(d\log d)$, our space lower bound is $\tilde Ω(d^2)$. Our bounds degrade gracefully with the stream length $N$, generally having the form $\tilde Ω\left(dκ \cdot \frac{κ}{N}\right)$. Bounds of the form $Ω(dκ)$ were known for learning parity and other problems defined over finite fields. Bounds that apply in a narrow range of sample sizes are also known for linear regression. Ours are the first such bounds for problems of the type commonly seen in recent learning applications that apply for a large range of input sizes.
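As a quick check of how the quantities quoted in the abstract fit together (a worked instance of the stated formulas, not an additional result): with a near-minimal number of examples, $N = \tilde O(κ)$, the general bound $\tilde Ω\left(dκ \cdot \frac{κ}{N}\right)$ simplifies to $\tilde Ω(dκ)$; and in the sparse-polynomial example, where $κ = Θ(d\log d)$, this gives $\tilde Ω(dκ) = \tilde Ω(d \cdot d\log d) = \tilde Ω(d^2)$, since the $\log d$ factor is absorbed by the $\tilde Ω(\cdot)$ notation.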
