Paper Title

OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression

Paper Authors

Wanhua Li, Xiaoke Huang, Zheng Zhu, Yansong Tang, Xiu Li, Jie Zhou, Jiwen Lu

Paper Abstract

This paper presents a language-powered paradigm for ordinal regression. Existing methods usually treat each rank as a category and employ a set of weights to learn these concepts. Such methods are prone to overfitting and usually attain unsatisfactory performance, since the learned concepts are derived mainly from the training set. Recent large pre-trained vision-language models such as CLIP have shown impressive performance on various visual tasks. In this paper, we propose to learn the rank concepts from the rich semantic CLIP latent space. Specifically, we reformulate this task as an image-language matching problem with a contrastive objective, which regards labels as text and obtains a language prototype from a text encoder for each rank. Since prompt engineering for CLIP is extremely time-consuming, we propose OrdinalCLIP, a differentiable prompting method for adapting CLIP to ordinal regression. OrdinalCLIP consists of learnable context tokens and learnable rank embeddings; the rank embeddings are constructed by explicitly modeling numerical continuity, resulting in well-ordered, compact language prototypes in the CLIP space. Once learned, we need only save the language prototypes and discard the huge language model, incurring zero additional computational overhead compared with the linear-head counterpart. Experimental results show that our paradigm achieves competitive performance on general ordinal regression tasks and gains improvements in few-shot and distribution-shift settings for age estimation. The code is available at https://github.com/xk-huang/OrdinalCLIP.
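For concreteness, below is a minimal PyTorch sketch of the two ingredients the abstract describes: shared learnable context tokens, and rank embeddings interpolated from a small set of learnable base embeddings so that numerical continuity is built in, with the resulting language prototypes matched against image features contrastively. All names here (RankPrompts, num_base, the stand-in mean-pool text encoder, etc.) are illustrative assumptions, not the authors' implementation; see the official code at https://github.com/xk-huang/OrdinalCLIP for the real thing.

```python
# Hedged sketch of the OrdinalCLIP idea: context tokens + interpolated rank
# embeddings -> language prototypes -> contrastive image-language matching.
# Names and the dummy text encoder are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RankPrompts(nn.Module):
    def __init__(self, num_ranks, num_base=8, num_context=5, embed_dim=512):
        super().__init__()
        self.num_ranks = num_ranks
        # Shared learnable context tokens (e.g. "a photo of a ... year old face").
        self.context = nn.Parameter(torch.randn(num_context, embed_dim) * 0.02)
        # A small set of learnable base rank embeddings; all intermediate ranks
        # are interpolated from them, which enforces numerical continuity.
        self.base = nn.Parameter(torch.randn(num_base, embed_dim) * 0.02)

    def rank_embeddings(self):
        # Rank r sits at a fractional position among the base embeddings;
        # neighbouring bases receive triangular (linear) interpolation weights.
        num_base = self.base.shape[0]
        pos = torch.linspace(0, num_base - 1, self.num_ranks,
                             device=self.base.device)               # (R,)
        idx = torch.arange(num_base, device=self.base.device)       # (B,)
        w = F.relu(1.0 - (pos[:, None] - idx[None, :]).abs())       # (R, B)
        w = w / w.sum(dim=1, keepdim=True)
        return w @ self.base                                        # (R, D)

    def forward(self):
        # Prepend the shared context tokens to each rank embedding, giving one
        # token sequence per rank for the (frozen) CLIP text encoder.
        ranks = self.rank_embeddings()                              # (R, D)
        ctx = self.context.unsqueeze(0).expand(self.num_ranks, -1, -1)
        return torch.cat([ctx, ranks.unsqueeze(1)], dim=1)          # (R, C+1, D)


def match_logits(image_feats, prototypes, temperature=0.01):
    # Contrastive image-language matching: cosine similarity between image
    # features and the R language prototypes, scaled by a temperature.
    img = F.normalize(image_feats, dim=-1)
    proto = F.normalize(prototypes, dim=-1)
    return img @ proto.t() / temperature


if __name__ == "__main__":
    prompts = RankPrompts(num_ranks=101)          # e.g. ages 0..100
    tokens = prompts()                            # (101, 6, 512)
    # Stand-in for the frozen CLIP text encoder: mean-pool each token
    # sequence. In practice each sequence runs through CLIP's transformer,
    # and only the resulting prototypes need to be saved after training.
    prototypes = tokens.mean(dim=1)               # (101, 512)
    image_feats = torch.randn(4, 512)             # a batch of image embeddings
    probs = match_logits(image_feats, prototypes).softmax(dim=-1)
    expected_rank = (probs * torch.arange(101.0)).sum(dim=-1)
    print(expected_rank)                          # soft rank prediction per image
```

Note how this sketch also reflects the zero-overhead claim in the abstract: at inference, only the (R, D) prototype matrix is needed, so the cost matches a linear classification head of the same shape.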
