Paper Title
On the Learning Property of Logistic and Softmax Losses for Deep Neural Networks
Paper Authors
Paper Abstract
Deep convolutional neural networks (CNNs) trained with logistic and softmax losses have made significant advances in visual recognition tasks in computer vision. When the training data exhibit class imbalance, class-wise reweighted versions of the logistic and softmax losses are often used to boost performance over the unweighted versions. In this paper, motivated by the need to explain this reweighting mechanism, we explicate the learning property of these two loss functions by analyzing the necessary condition (i.e., the gradient equals zero) that holds after a CNN has been trained to converge to a local minimum. The analysis immediately provides explanations for (1) the quantitative effect of the class-wise reweighting mechanism: it is deterministic for binary classification with the logistic loss yet indeterministic for multi-class classification with the softmax loss; and (2) the disadvantage of the logistic loss for single-label multi-class classification via the one-vs.-all approach, which stems from the averaging effect on the predicted probabilities of the negative class (i.e., the non-target classes) during learning. With the advantages and disadvantages of the logistic loss disentangled, we then propose a novel reweighted logistic loss for multi-class classification. Our simple yet effective formulation improves on the ordinary logistic loss by focusing learning on hard non-target classes (target vs. non-target in one-vs.-all) and turns out to be competitive with the softmax loss. We evaluate our method on several benchmark datasets to demonstrate its effectiveness.
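To make the loss functions discussed in the abstract concrete, below is a minimal NumPy sketch of (a) the standard, optionally class-reweighted, softmax cross-entropy and (b) a one-vs.-all logistic loss with an optional up-weighting of non-target terms. The names (`softmax_cross_entropy`, `one_vs_all_logistic`, `neg_weights`) and the particular "hard non-target" weighting heuristic are assumptions introduced here for illustration only; they are not the paper's exact reweighting formulation.

```python
import numpy as np

def softmax_cross_entropy(logits, label, class_weights=None):
    """Softmax cross-entropy for one sample, optionally class-reweighted.

    logits: (K,) array of class scores; label: integer target class index.
    class_weights: optional (K,) array; the loss of a sample whose target
    class is k is scaled by class_weights[k].
    """
    z = logits - logits.max()                     # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())       # log softmax
    loss = -log_probs[label]
    if class_weights is not None:
        loss = class_weights[label] * loss
    return loss

def one_vs_all_logistic(logits, label, neg_weights=None):
    """One-vs.-all logistic (sigmoid) loss for single-label data.

    Each class contributes an independent binary cross-entropy term: the
    target class is treated as positive, every other class as negative.
    neg_weights: optional (K,) array that up-weights the non-target terms,
    a stand-in for focusing learning on "hard" non-target classes; this is
    an illustrative choice, not the formulation proposed in the paper.
    """
    probs = 1.0 / (1.0 + np.exp(-logits))         # per-class sigmoid probabilities
    targets = np.zeros_like(logits)
    targets[label] = 1.0
    eps = 1e-12                                   # avoid log(0)
    bce = -(targets * np.log(probs + eps) + (1 - targets) * np.log(1 - probs + eps))
    if neg_weights is not None:
        weights = np.where(targets == 1.0, 1.0, neg_weights)
        bce = weights * bce
    return bce.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=5)                   # 5-class toy example
    label = 2
    # Hypothetical heuristic: up-weight non-target classes whose predicted
    # probability is high (i.e., "hard" negatives).
    probs = 1.0 / (1.0 + np.exp(-logits))
    neg_weights = 1.0 + probs
    print("softmax CE:", softmax_cross_entropy(logits, label))
    print("one-vs-all logistic:", one_vs_all_logistic(logits, label))
    print("reweighted one-vs-all logistic:", one_vs_all_logistic(logits, label, neg_weights))
```

The sketch only illustrates how the one-vs.-all decomposition exposes per-class (and in particular non-target) terms that can be reweighted independently, in contrast to the softmax loss, where a single weight per sample scales the whole term.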