Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, and Yannis Avrithis

AlignMixup: Improving Representations By Interpolating Aligned Features

[Figure: Different mixup methods.]

Mixup is a powerful data augmentation method that interpolates between two or more examples in the input or feature space and between the corresponding target labels. Many recent mixup methods focus on cutting and pasting two or more objects into one image, which is more about efficient processing than interpolation. However, how to best interpolate images is not well defined. In this sense, mixup has been connected to autoencoders, because often autoencoders “interpolate well”, for instance generating an image that continuously deforms into another.
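The interpolation described above can be sketched in a few lines. This is a minimal illustration of standard input-space mixup, not the method of this paper; the function name and the Beta-distribution mixing coefficient follow common practice and are illustrative.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Input-space mixup sketch: convexly combine two examples and
    their (soft or one-hot) target labels with the same coefficient."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)      # mixing coefficient in [0, 1]
    x = lam * x1 + (1 - lam) * x2     # interpolated input
    y = lam * y1 + (1 - lam) * y2     # interpolated target
    return x, y
```

Because the same coefficient mixes inputs and labels, the resulting target remains a valid probability distribution whenever the inputs' targets are.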

In this work, we revisit mixup from the deformation perspective and introduce AlignMix, where we geometrically align two images in the feature space. The correspondences allow us to interpolate between two sets of features, while keeping the locations of one set. Interestingly, this retains mostly the geometry or pose of one image and the appearance or texture of the other. We also show that an autoencoder can still improve representation learning under mixup, without the classifier ever seeing decoded images. AlignMix outperforms state-of-the-art mixup methods on five different benchmarks.
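The core idea of aligning two feature sets before interpolating can be sketched as follows. This is an illustrative approximation, not the paper's implementation: a temperature-scaled softmax over cosine similarities stands in for the assignment computed in the paper, and all names and parameters are assumptions.

```python
import numpy as np

def align_and_mix(f1, f2, lam=0.5, tau=0.1):
    """Hedged sketch of alignment-based mixing.
    f1, f2: (n, d) flattened feature maps (n spatial positions, d channels).
    Features of f2 are softly reassigned to the spatial positions of f1,
    so the mix keeps the geometry of image 1 and borrows appearance from image 2."""
    f1n = f1 / np.linalg.norm(f1, axis=1, keepdims=True)
    f2n = f2 / np.linalg.norm(f2, axis=1, keepdims=True)
    sim = f1n @ f2n.T                     # (n, n) cosine similarities
    P = np.exp(sim / tau)
    P /= P.sum(axis=1, keepdims=True)     # soft assignment of f2 positions to f1's
    f2_aligned = P @ f2                   # f2 content carried to f1's locations
    return lam * f1 + (1 - lam) * f2_aligned
```

With lam = 1 the output is exactly the first feature set; lowering lam blends in progressively more of the aligned second set while the spatial layout of the first is preserved.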

Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 19174–19183, June 2022.

IARAI Authors: Dr Yannis Avrithis
Keywords: Data Augmentation, Deep Learning, Image Classification, Mixup, Representation Learning

