Chull Hwan Song, Jooyoung Yoon, Shunghyun Choi, and Yannis Avrithis

Figure: An overview of the proposed method.

Vision Transformers have achieved remarkable progress in vision tasks such as image classification and detection. However, in instance-level image retrieval, Transformers have not yet shown performance on par with convolutional networks. We propose a number of improvements that make Transformers outperform the state of the art for the first time. (1) We show that a hybrid architecture is more effective than plain Transformers, by a large margin. (2) We introduce two branches collecting global (classification token) and local (patch tokens) information, from which we form a global image representation. (3) In each branch, we collect multi-layer features from the Transformer encoder, corresponding to skip connections across distant layers. (4) We enhance locality of interactions at the deeper layers of the encoder, which is a relative weakness of Vision Transformers. We train our model on all commonly used training sets and, for the first time, we make fair comparisons separately per training set. In all cases, we outperform previous models based on global representation. Public code is available on GitHub.
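The following is a minimal sketch, not the authors' code, of the two-branch aggregation the abstract describes: the classification token (global branch) and the patch tokens (local branch) are collected from several encoder layers and fused into a single global descriptor. The layer count, pooling choice, output dimension, and module names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoBranchAggregator(nn.Module):
    """Hypothetical aggregator: fuse multi-layer CLS and patch tokens."""

    def __init__(self, dim: int, out_dim: int = 512, num_layers_used: int = 4):
        super().__init__()
        # Project the concatenated multi-layer global and local features
        # to a compact descriptor (dimensions are assumptions).
        self.proj = nn.Linear(2 * num_layers_used * dim, out_dim)

    def forward(self, layer_tokens):
        # layer_tokens: list of (B, 1 + N, D) token tensors taken from the
        # last few encoder layers (skip connections across distant layers).
        global_feats, local_feats = [], []
        for tokens in layer_tokens:
            cls_tok = tokens[:, 0]    # global branch: classification token, (B, D)
            patches = tokens[:, 1:]   # local branch: patch tokens, (B, N, D)
            global_feats.append(cls_tok)
            # Plain mean pooling over patch tokens is used here for simplicity;
            # the pooling choice is an assumption, not the paper's method.
            local_feats.append(patches.mean(dim=1))
        fused = torch.cat(global_feats + local_feats, dim=-1)  # (B, 2*L*D)
        desc = self.proj(fused)
        return F.normalize(desc, dim=-1)  # L2-normalized global descriptor


if __name__ == "__main__":
    B, N, D, L = 2, 196, 768, 4
    fake_layers = [torch.randn(B, 1 + N, D) for _ in range(L)]
    descriptor = TwoBranchAggregator(dim=D, num_layers_used=L)(fake_layers)
    print(descriptor.shape)  # torch.Size([2, 512])
```

In this sketch the per-layer features come from a Transformer encoder (in the paper, on top of a hybrid convolutional stem); how the layers are selected and how the local branch is pooled are design choices the paper itself specifies, and are only stubbed here.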

arXiv:2210.11909, 2022-10-21.

IARAI Authors: Dr Yannis Avrithis
Research: Algorithms
Keywords: Attention Mechanism, Computer Vision, Image Recognition, Image Retrieval, Vision Transformer
