Paper Title
A Canonical Transform for Strengthening the Local $L^p$-Type Universal Approximation Property
Paper Authors
Paper Abstract
Most $L^p$-type universal approximation theorems guarantee that a given machine learning model class $\mathscr{F}\subseteq C(\mathbb{R}^d,\mathbb{R}^D)$ is dense in $L^p_μ(\mathbb{R}^d,\mathbb{R}^D)$ for any suitable finite Borel measure $μ$ on $\mathbb{R}^d$. Unfortunately, this means that the model's approximation quality can rapidly degenerate outside some compact subset of $\mathbb{R}^d$, as any such measure is largely concentrated on some bounded subset of $\mathbb{R}^d$. This paper proposes a generic solution to this approximation-theoretic problem by introducing a canonical transformation which "upgrades $\mathscr{F}$'s approximation property" in the following sense. The transformed model class, denoted by $\mathscr{F}\text{-tope}$, is shown to be dense in $L^p_{μ,\text{strict}}(\mathbb{R}^d,\mathbb{R}^D)$, which is a topological space whose elements are locally $p$-integrable functions and whose topology is much finer than the usual norm topology on $L^p_μ(\mathbb{R}^d,\mathbb{R}^D)$; here $μ$ is any suitable $σ$-finite Borel measure on $\mathbb{R}^d$. Next, we show that if $\mathscr{F}$ is any family of analytic functions then there is always a strict "gap" between $\mathscr{F}\text{-tope}$'s expressibility and that of $\mathscr{F}$, since we find that $\mathscr{F}$ can never be dense in $L^p_{μ,\text{strict}}(\mathbb{R}^d,\mathbb{R}^D)$. In the general case, where $\mathscr{F}$ may contain non-analytic functions, we provide an abstract form of these results guaranteeing that there always exists some function space in which $\mathscr{F}\text{-tope}$ is dense but $\mathscr{F}$ is not, while the converse is never possible. Applications to feedforward networks, convolutional neural networks, and polynomial bases are explored.
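The motivating claim, that a small $L^p_μ$ error under a finite measure says nothing about behavior far from where $μ$ concentrates, can be illustrated numerically. The sketch below is not from the paper; it uses a hypothetical target ($\sin$), a standard Gaussian as $μ$, and a "model" that agrees with the target only on $[-3,3]$. The $L^2_μ$ error is tiny because $μ$ places almost no mass outside $[-3,3]$, yet the pointwise error at $x=8$ is of order one.

```python
import numpy as np

# Hypothetical illustration (not from the paper): a finite measure mu
# (here the standard Gaussian) concentrates its mass near the origin,
# so a model matching the target only on a compact set can have a tiny
# L^2_mu error while failing badly outside that set.

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)  # Monte Carlo samples from mu = N(0, 1)

target = np.sin  # hypothetical target function

def model(t):
    # agrees with the target on [-3, 3], constant (and wrong) outside
    return np.where(np.abs(t) <= 3.0, np.sin(t), 0.0)

# Monte Carlo estimate of the L^2_mu error ||model - target||_{L^2_mu}
l2_mu_error = np.sqrt(np.mean((model(x) - target(x)) ** 2))

# pointwise error at a point far outside the bulk of mu's mass
far_point = 8.0
pointwise_error = abs(model(far_point) - np.sin(far_point))

print(l2_mu_error)      # small: mu barely sees |x| > 3
print(pointwise_error)  # order 1: the model is useless at x = 8
```

Since $P(|X|>3)\approx 0.0027$ for a standard Gaussian and the error is bounded by 1 there, the $L^2_μ$ error is below roughly $0.06$, while the error at $x=8$ is about $|\sin 8|\approx 0.99$. The strict topology of the paper is designed precisely to rule out this kind of degeneration, which the plain $L^p_μ$ norm cannot detect.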