Paper Title


On Mean Absolute Error for Deep Neural Network Based Vector-to-Vector Regression

Paper Authors

Jun Qi, Jun Du, Sabato Marco Siniscalchi, Xiaoli Ma, Chin-Hui Lee

Abstract


In this paper, we exploit the properties of mean absolute error (MAE) as a loss function for deep neural network (DNN) based vector-to-vector regression. The goal of this work is two-fold: (i) presenting performance bounds of MAE, and (ii) demonstrating new properties of MAE that make it more appropriate than mean squared error (MSE) as a loss function for DNN-based vector-to-vector regression. First, we show that a generalized upper bound for DNN-based vector-to-vector regression can be ensured by leveraging the known Lipschitz continuity property of MAE. Next, we derive a new generalized upper bound in the presence of additive noise. Finally, in contrast to conventional MSE commonly adopted to approximate Gaussian errors for regression, we show that MAE can be interpreted as an error modeled by a Laplacian distribution. Speech enhancement experiments are conducted to corroborate our proposed theorems and validate the performance advantages of MAE over MSE for DNN-based regression.
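As a quick illustration of the abstract's central contrast (this is a hypothetical sketch, not code from the paper): up to additive constants, minimizing MSE is the maximum-likelihood estimate under a unit-variance Gaussian error model, while minimizing MAE is the maximum-likelihood estimate under a unit-scale Laplacian error model. One practical consequence is that MSE grows quadratically with a single large regression error, whereas MAE grows only linearly:

```python
import numpy as np

def mse_loss(y_pred, y_true):
    # Proportional to the Gaussian negative log-likelihood (unit variance)
    return np.mean((y_pred - y_true) ** 2)

def mae_loss(y_pred, y_true):
    # Proportional to the Laplacian negative log-likelihood (unit scale)
    return np.mean(np.abs(y_pred - y_true))

rng = np.random.default_rng(0)
y_true = rng.normal(size=8)

y_pred = y_true.copy()
y_pred[0] += 10.0  # one large error in an 8-dimensional target vector

print(mse_loss(y_pred, y_true))  # 10**2 / 8 = 12.5, dominated by the outlier
print(mae_loss(y_pred, y_true))  # 10 / 8 = 1.25, grows only linearly
```

This robustness to large deviations is one intuition behind the paper's claim that MAE can be preferable to MSE for DNN-based vector-to-vector regression tasks such as speech enhancement.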
