Paper Title
O-ViT: Orthogonal Vision Transformer
Paper Authors
Paper Abstract
Inspired by the tremendous success of the self-attention mechanism in natural language processing, the Vision Transformer (ViT) creatively applies it to image patch sequences and achieves remarkable performance. However, the scaled dot-product self-attention of ViT introduces scale ambiguity into the structure of the original feature space. To address this problem, we propose a novel method named Orthogonal Vision Transformer (O-ViT), which optimizes ViT from a geometric perspective. O-ViT constrains the parameters of the self-attention blocks to lie on the norm-preserving orthogonal manifold, which keeps the geometry of the feature space intact. Moreover, O-ViT achieves both the orthogonal constraint and low optimization overhead by adopting a surjective mapping between the orthogonal group and its Lie algebra. We have conducted comparative experiments on image recognition tasks to demonstrate the validity of O-ViT, and the experiments show that O-ViT can boost the performance of ViT by up to 3.6%.
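To make the abstract's core idea concrete, below is a minimal, hypothetical sketch (not the authors' code) of one standard way to realize such a constraint in PyTorch: an unconstrained parameter is mapped to its skew-symmetric part (an element of the Lie algebra so(n)), and the matrix exponential, which maps so(n) surjectively onto the special orthogonal group SO(n), produces an orthogonal weight. The class name `OrthogonalLinear` and all details here are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: keep a projection weight orthogonal by optimizing an
# unconstrained matrix and mapping it through exp: so(n) -> SO(n).
import torch
import torch.nn as nn


class OrthogonalLinear(nn.Module):
    """Linear map whose effective weight always satisfies W^T W = I."""

    def __init__(self, dim: int):
        super().__init__()
        # Unconstrained parameter; ordinary SGD/Adam can update it freely.
        self.param = nn.Parameter(torch.randn(dim, dim) * 0.01)

    def weight(self) -> torch.Tensor:
        # Skew-symmetric part A = M - M^T lies in the Lie algebra so(dim).
        skew = self.param - self.param.transpose(-1, -2)
        # The matrix exponential of a skew-symmetric matrix is orthogonal,
        # so the returned weight stays on the orthogonal manifold with no
        # explicit projection or retraction step.
        return torch.linalg.matrix_exp(skew)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight().transpose(-1, -2)


if __name__ == "__main__":
    layer = OrthogonalLinear(8)
    w = layer.weight()
    # Sanity check: W^T W should be (numerically) the identity.
    print(torch.allclose(w.transpose(-1, -2) @ w, torch.eye(8), atol=1e-5))
```

Because the orthogonality is built into the parameterization, norm preservation of the features holds by construction throughout training; the only extra cost is the matrix exponential, which is cheap for the per-head projection sizes typical of ViT.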