Thomas Adler, Johannes Brandstetter, Michael Widrich, Andreas Mayr, David Kreil, Michael Kopp, Günter Klambauer, and Sepp Hochreiter
Deep learning relies on large amounts of high-quality training data to produce accurate results. Models trained on sufficient data often need to be adapted to new problems where training data are scarce, or costly and impractical to label. Considerable changes in the distribution of the input or target variables are called domain shifts. Large domain shifts, where the new data differ substantially from the original data, are particularly challenging, as higher-level concepts are no longer shared between the original and the new domain.
Few-shot learning tackles domain shifts by applying prior knowledge to learn from a few training examples. Here, we introduce a new method called Cross-domain Hebbian Ensemble Few-shot learning (CHEF). Deep neural networks contain multiple layers that learn data representations at different levels of abstraction, with each layer tuned to specific features in the input data. CHEF builds on representation fusion, which extracts and unifies relevant information from different levels of abstraction. Representation fusion is implemented using an ensemble of Hebbian learners operating on distinct representation levels in parallel. This allows the selection of more general or more specific features, depending on the similarity between the new and original domains.
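The ensemble idea above can be sketched as follows. This is an illustrative toy example, not the authors' implementation: the feature dimensions, the random "backbone" features, the single-step supervised Hebbian rule, and the learning rate are all placeholder assumptions. One Hebbian readout is fit per representation level, and fusion is realized by summing the per-level class scores.

```python
# Illustrative sketch of representation fusion with an ensemble of
# Hebbian learners (NOT the authors' implementation; all dimensions,
# features, and the learning rate are placeholder assumptions).
import numpy as np

rng = np.random.default_rng(0)

def hebbian_readout(features, labels, n_classes, lr=0.1):
    # Supervised Hebbian rule: W += lr * post^T @ pre, with the one-hot
    # target as the post-synaptic activity and the layer features as
    # the pre-synaptic activity, averaged over the support set.
    n = features.shape[0]
    onehot = np.eye(n_classes)[labels]
    return lr * onehot.T @ features / n  # shape: (n_classes, d)

# Simulated frozen-backbone features from three abstraction levels
# for a 5-way support set with 5 examples per class.
n_classes = 5
labels = np.repeat(np.arange(n_classes), 5)              # 25 support labels
support = [rng.normal(size=(25, d)) for d in (64, 128, 256)]

# One Hebbian learner per representation level, trained in parallel.
readouts = [hebbian_readout(f, labels, n_classes) for f in support]

# Representation fusion: sum (ensemble) the per-level class scores.
query = [rng.normal(size=(10, d)) for d in (64, 128, 256)]
scores = sum(W @ q.T for W, q in zip(readouts, query))   # (n_classes, 10)
pred = scores.argmax(axis=0)                             # class per query
```

Depending on how similar the new domain is to the original one, the ensemble can weight lower-level (more general) or higher-level (more specific) representation levels more strongly; the uniform sum here is the simplest such combination.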
We apply CHEF to four cross-domain few-shot learning challenges with domain shifts of varying size, measured by the Fréchet Inception Distance (FID). We also test CHEF on two standardized image-based benchmark datasets, miniImageNet and tieredImageNet, and on real-world drug discovery tasks. On small domain shifts, CHEF is competitive with state-of-the-art methods. On large domain shifts, CHEF significantly outperforms all other methods.