Paper title
Adaptive additive classification-based loss for deep metric learning
Paper authors
Paper abstract
Recent work has shown that deep metric learning algorithms can benefit from weak supervision provided by another input modality. This additional modality can be incorporated directly into the popular triplet-based loss function as distances. More recently, classification losses and proxy-based metric learning have been observed to lead to faster convergence and better retrieval results, without requiring complex and costly sampling strategies. In this paper, we propose an extension of the existing adaptive margin for classification-based deep metric learning. Our extension introduces a separate margin for each negative proxy per sample. These margins are computed during training from precomputed distances between classes in the other modality. Our results set a new state of the art on both the Amazon fashion retrieval dataset and the public DeepFashion dataset, with both fastText- and BERT-based embeddings for the additional textual modality. These results were achieved with faster convergence and lower code complexity than the prior state of the art.
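To make the abstract's core idea concrete, below is a minimal PyTorch sketch of a classification-based (proxy) loss with a separate additive margin for each negative proxy per sample, where the margins are derived from precomputed class distances in an auxiliary text modality. The abstract does not give the exact formulation, so the normalized-softmax form, the linear margin mapping, and the names and hyperparameters (`class_dist`, `scale`, `base_margin`) are illustrative assumptions, not the authors' published method.

```python
import torch
import torch.nn.functional as F

def adaptive_additive_margin_loss(embeddings, labels, proxies,
                                  class_dist, scale=16.0, base_margin=0.1):
    """Classification-based metric learning loss with per-sample,
    per-negative-proxy additive margins.

    class_dist[i, j] is assumed to hold a precomputed distance between
    classes i and j in the auxiliary (text) modality; the linear mapping
    from distance to margin is an illustrative choice.
    """
    # Cosine similarities between L2-normalized embeddings and proxies.
    emb = F.normalize(embeddings, dim=1)
    prx = F.normalize(proxies, dim=1)
    logits = emb @ prx.t()                          # (batch, num_classes)

    # Margin for each negative proxy, taken per sample from the other
    # modality: classes close in text space get a smaller margin, so the
    # embedding space is allowed to keep them closer together.
    margins = base_margin * class_dist[labels]      # (batch, num_classes)
    margins.scatter_(1, labels.unsqueeze(1), 0.0)   # no margin on the positive

    # Adding margins to the negative logits forces the positive logit to
    # exceed each negative by its own class-specific margin.
    return F.cross_entropy(scale * (logits + margins), labels)

# Toy usage: 8 samples, 4 classes, 32-dim embeddings.
torch.manual_seed(0)
emb = torch.randn(8, 32, requires_grad=True)
labels = torch.randint(0, 4, (8,))
proxies = torch.randn(4, 32, requires_grad=True)
class_dist = torch.rand(4, 4)
class_dist = (class_dist + class_dist.t()) / 2      # symmetric distances
class_dist.fill_diagonal_(0.0)
loss = adaptive_additive_margin_loss(emb, labels, proxies, class_dist)
loss.backward()
```

Because the margins enter only as an additive term inside a standard cross-entropy over proxy similarities, no triplet mining or sampling strategy is needed, which is consistent with the faster convergence and lower code complexity claimed in the abstract.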