Paper Title
V$^2$L: Leveraging Vision and Vision-language Models into Large-scale Product Retrieval
Paper Authors
Paper Abstract
Product retrieval is of great importance in the e-commerce domain. This paper introduces our 1st-place solution in the eBay eProduct Visual Search Challenge (FGVC9), which features an ensemble of about 20 vision models and vision-language models. While model ensembling is common, we show that combining vision models and vision-language models brings particular benefits from their complementarity and is a key factor in our superiority. Specifically, for the vision models, we use a two-stage training pipeline that first learns from the coarse labels provided in the training set and then conducts fine-grained self-supervised training, yielding a coarse-to-fine metric learning scheme. For the vision-language models, we use the textual descriptions of the training images as supervision signals for fine-tuning the image encoder (feature extractor). With these designs, our solution achieves 0.7623 MAR@10, ranking first among all competitors. The code is available at: \href{https://github.com/WangWenhao0716/V2L}{V$^2$L}.
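The feature-level ensemble of vision and vision-language models described above can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the function names, the per-model L2-normalize-then-concatenate scheme, and the cosine-similarity ranking are assumptions about how such an ensemble is typically realized for retrieval.

```python
import numpy as np

def ensemble_embed(per_model_feats):
    """Fuse embeddings from several models into one descriptor per image.

    per_model_feats: list of (n, d_i) arrays, one per model (e.g. some from
    vision models, some from vision-language image encoders). Each model's
    features are L2-normalized first so every model contributes equally,
    then concatenated along the feature axis.
    """
    normed = [f / np.linalg.norm(f, axis=1, keepdims=True)
              for f in per_model_feats]
    return np.concatenate(normed, axis=1)

def retrieve_top_k(query_feats, gallery_feats, k=10):
    """Rank gallery images for each query by cosine similarity.

    On L2-normalized features, cosine similarity reduces to a dot product.
    Returns an (n_queries, k) array of gallery indices, best match first.
    """
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = q @ g.T
    return np.argsort(-sims, axis=1)[:, :k]

# Toy usage with random stand-ins for two models' embeddings.
rng = np.random.default_rng(0)
vision_feats = rng.normal(size=(5, 8))   # hypothetical vision-model features
vl_feats = rng.normal(size=(5, 16))      # hypothetical VL image-encoder features
emb = ensemble_embed([vision_feats, vl_feats])
top10 = retrieve_top_k(emb, emb, k=3)
```

A metric such as MAR@10 is then computed over the returned top-10 index lists against the ground-truth matches.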