Paper Title
Benchmarking Deep Models for Salient Object Detection
Paper Authors
Abstract
In recent years, deep network-based methods have continuously refreshed state-of-the-art performance on the Salient Object Detection (SOD) task. However, performance discrepancies caused by differing implementation details may conceal the real progress on this task, so an impartial comparison is required for future research. To meet this need, we construct a general SALient Object Detection (SALOD) benchmark to conduct a comprehensive comparison among several representative SOD methods. Specifically, we re-implement 14 representative SOD methods using consistent training settings. Moreover, two additional protocols are set up in our benchmark to investigate the robustness of existing methods under certain limited conditions. In the first protocol, we enlarge the difference between the objectness distributions of the training and test sets to evaluate the robustness of these SOD methods. In the second protocol, we build multiple training subsets of different scales to validate whether these methods can extract discriminative features from only a few samples. In the above experiments, we find that existing loss functions are usually specialized for some metrics but report inferior results on the others. Therefore, we propose a novel Edge-Aware (EA) loss that promotes deep networks to learn more discriminative features by integrating both pixel- and image-level supervision signals. Experiments prove that our EA loss achieves more robust performance than existing losses.
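The abstract does not give the exact formulation of the EA loss, but the stated idea of combining pixel-level and image-level supervision can be illustrated with a minimal sketch. Below is a hypothetical NumPy version, assuming the pixel-level term is a boundary-weighted binary cross-entropy and the image-level term is an IoU-style loss over the whole saliency map; the function names (`edge_weight`, `ea_loss_sketch`) and the specific weighting scheme are illustrative assumptions, not the paper's actual definition.

```python
import numpy as np

def edge_weight(gt, boost=4.0):
    # Approximate a boundary map from the ground-truth mask via its
    # spatial gradient; boundary pixels get a larger weight.
    gy, gx = np.gradient(gt.astype(float))
    edges = np.sqrt(gx ** 2 + gy ** 2)
    return 1.0 + boost * (edges > 0)

def ea_loss_sketch(pred, gt, eps=1e-7):
    """Hypothetical edge-aware loss: edge-weighted pixel BCE (pixel-level
    supervision) plus an IoU-style term over the map (image-level supervision).
    `pred` holds saliency probabilities in (0, 1); `gt` is a binary mask."""
    pred = np.clip(pred, eps, 1.0 - eps)
    # Pixel-level term: BCE weighted toward object boundaries.
    w = edge_weight(gt)
    bce = -(gt * np.log(pred) + (1 - gt) * np.log(1 - pred))
    pixel_term = (w * bce).sum() / w.sum()
    # Image-level term: soft IoU computed over the whole image.
    inter = (pred * gt).sum()
    union = (pred + gt - pred * gt).sum()
    image_term = 1.0 - inter / (union + eps)
    return pixel_term + image_term
```

The intuition is that the pixel term sharpens object boundaries while the image term keeps the global overlap metrics (e.g., F-measure-like scores) from degrading, which matches the paper's claim that single-purpose losses trade one family of metrics against another.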