Paper Title
Error Bound of Empirical $\ell_2$ Risk Minimization for Noisy Standard and Generalized Phase Retrieval Problems
Paper Authors
Paper Abstract
In this paper, we study the estimation performance of empirical $\ell_2$ risk minimization (ERM) in noisy (standard) phase retrieval (NPR) given by $y_k = |\alpha_k^*x_0|^2+\eta_k$, or noisy generalized phase retrieval (NGPR) formulated as $y_k = x_0^*A_kx_0 + \eta_k$, where $x_0\in\mathbb{K}^d$ is the desired signal, $n$ is the sample size, and $\eta = (\eta_1,\dots,\eta_n)^\top$ is the noise vector. We establish new error bounds under different noise patterns, and our proofs are valid for both $\mathbb{K}=\mathbb{R}$ and $\mathbb{K}=\mathbb{C}$. In NPR under an arbitrary noise vector $\eta$, we derive a new error bound $O\big(\|\eta\|_\infty\sqrt{\frac{d}{n}} + \frac{|\mathbf{1}^\top\eta|}{n}\big)$, which is tighter than the currently known bound $O\big(\frac{\|\eta\|}{\sqrt{n}}\big)$ in many cases. In NGPR, we show the bound $O\big(\|\eta\|\frac{\sqrt{d}}{n}\big)$ for arbitrary $\eta$. In both problems, the bounds for arbitrary noise immediately give rise to $\tilde{O}(\sqrt{\frac{d}{n}})$ bounds for sub-Gaussian or sub-exponential random noise, with some conventional but inessential assumptions (e.g., independence or zero-mean conditions) removed or weakened. In addition, we make a first attempt to study ERM under heavy-tailed random noise assumed to have a bounded $l$-th moment. To achieve a trade-off between bias and variance, we truncate the responses and propose a corresponding robust ERM estimator, which is shown to possess the guarantee $\tilde{O}\big(\big[\sqrt{\frac{d}{n}}\big]^{1-1/l}\big)$ in both NPR and NGPR. All the error bounds straightforwardly extend to the more general problem of rank-$r$ matrix recovery, and these results lead to the conclusion that the full-rank frame $\{A_k\}_{k=1}^n$ in NGPR is more robust to biased noise than the rank-1 frame $\{\alpha_k\alpha_k^*\}_{k=1}^n$ in NPR. Extensive experimental results are presented to illustrate our theoretical findings.
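To make the setup concrete, the minimal sketch below (Python/NumPy, not taken from the paper) illustrates empirical $\ell_2$ risk minimization for the real-valued NPR model $y_k = |\alpha_k^\top x_0|^2 + \eta_k$ via gradient descent with a spectral-type initialization, together with the response-truncation idea behind the robust ERM estimator for heavy-tailed noise. The function names, step size, initialization, and truncation level are illustrative assumptions; the paper's exact estimator and truncation rule may differ.

```python
# A minimal sketch (not the authors' implementation) of empirical l2 risk
# minimization for noisy standard phase retrieval in the real case K = R.
import numpy as np

def erm_npr(A, y, steps=3000, lr=None):
    """Approximately minimize (1/n) * sum_k ((a_k^T x)^2 - y_k)^2 over x."""
    n, d = A.shape
    # Spectral-type initialization: leading eigenvector of (1/n) sum_k y_k a_k a_k^T,
    # rescaled so that ||x||^2 roughly matches the mean response.
    S = (A.T * y) @ A / n
    _, V = np.linalg.eigh(S)
    x = V[:, -1] * np.sqrt(max(np.mean(y), 1e-12))
    if lr is None:
        lr = 0.02 / max(np.mean(y), 1e-12)  # crude step size; tune in practice
    for _ in range(steps):
        z = A @ x                            # a_k^T x for all k
        r = z ** 2 - y                       # residuals (a_k^T x)^2 - y_k
        grad = (4.0 / n) * (A.T @ (r * z))   # gradient of the empirical l2 risk
        x -= lr * grad
    return x

def truncate_responses(y, tau):
    """Truncate responses at level tau (the robust ERM idea for heavy-tailed noise)."""
    return np.sign(y) * np.minimum(np.abs(y), tau)

# Usage: recover x0 (up to global sign) from noisy quadratic measurements.
rng = np.random.default_rng(0)
d, n = 20, 400
x0 = rng.standard_normal(d)
A = rng.standard_normal((n, d))
y = (A @ x0) ** 2 + 0.1 * rng.standard_normal(n)
x_hat = erm_npr(A, y)
# For heavy-tailed noise, one would instead run erm_npr(A, truncate_responses(y, tau)).
err = min(np.linalg.norm(x_hat - x0), np.linalg.norm(x_hat + x0)) / np.linalg.norm(x0)
print(f"relative error up to sign: {err:.3f}")
```

In this sketch, truncating the responses caps the influence of large noise realizations at the cost of some bias, which is the bias-variance trade-off the robust ERM estimator is designed to balance.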