Thomas Adler, Johannes Brandstetter, Michael Widrich, Andreas Mayr, David Kreil, Michael Kopp, Günter Klambauer, and Sepp Hochreiter


Working principle of CHEF. An ensemble of Hebbian learners is applied to the upper layers of a trained neural network. Distilling information from different layers of abstraction is called representation fusion. Each Hebbian learner is iteratively optimized and the results are combined.

Deep learning relies on large amounts of high-quality training data to produce accurate results. Models trained on abundant data often need to be adapted to new problems where training data are scarce, or too costly and impractical to label. Considerable changes in the distribution of the input or target variables are called domain shifts. Large domain shifts, where the new data differ substantially from the original data, are particularly challenging because higher-level concepts are not shared between the original and the new domain.

Few-shot learning tackles domain shifts by applying prior knowledge to learn from only a few training examples. Here, we introduce a new method called Cross-domain Hebbian Ensemble Few-shot learning (CHEF). Deep neural networks contain multiple layers that learn data representations at different levels of abstraction, with each layer tuned to specific features in the input data. CHEF builds on representation fusion, which extracts and unifies the relevant information from these different levels of abstraction. Representation fusion is implemented as an ensemble of Hebbian learners operating on distinct representation levels in parallel. This allows CHEF to select more general or more specific features depending on the similarity between the new and the original domain.
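As a minimal sketch of this idea (not the exact CHEF optimizer), the toy example below fits one Hebbian-style linear readout per backbone layer on a small support set and fuses their class scores by averaging. The delta-rule update, the averaging scheme, and all names are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_hebbian(feats, labels, n_classes, lr=0.1, steps=100):
    """Iteratively fit a linear readout with a Hebbian-style
    (delta-rule) update; illustrative, not the exact CHEF rule."""
    W = np.zeros((n_classes, feats.shape[1]))
    Y = np.eye(n_classes)[labels]              # one-hot targets
    for _ in range(steps):
        P = softmax(feats @ W.T)               # current class scores
        # strengthen input->class connections in proportion to the
        # residual between target and prediction
        W += lr * (Y - P).T @ feats / len(feats)
    return W

def fuse_predict(layer_feats, weights):
    """Representation fusion: average class scores across layers."""
    scores = sum(softmax(f @ W.T) for f, W in zip(layer_feats, weights))
    return scores.argmax(axis=1)

# toy 2-way 5-shot episode with features from two backbone "layers"
rng = np.random.default_rng(0)
n_cls, shots, dim = 2, 5, 8
labels = np.repeat(np.arange(n_cls), shots)
centers = rng.normal(size=(n_cls, dim))
layer_feats = [centers[labels] + 0.2 * rng.normal(size=(n_cls * shots, dim))
               for _ in range(2)]
weights = [fit_hebbian(f, labels, n_cls) for f in layer_feats]
pred = fuse_predict(layer_feats, weights)
print("support accuracy:", (pred == labels).mean())
```

Because each readout sees a different level of abstraction, the fused prediction can lean on lower-level features when the target domain is far from the training domain.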

We apply CHEF to four cross-domain few-shot learning challenges with domain shifts of different sizes, measured by the Fréchet Inception Distance (FID). We also test CHEF on two standardized image-based benchmark datasets, miniImageNet and tieredImageNet, and on real-world drug discovery tasks. On small domain shifts, CHEF is competitive with state-of-the-art methods; on large domain shifts, it significantly outperforms all other methods.
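FID quantifies domain shift from the first two moments of two feature distributions: FID = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}). Here is a minimal sketch on synthetic features; the paper computes FID on Inception features, and the eigenvalue trick for the trace term is an implementation assumption:

```python
import numpy as np

def fid(x, y):
    """Frechet distance between two feature sets (rows = samples):
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))."""
    mu1, mu2 = x.mean(axis=0), y.mean(axis=0)
    s1 = np.cov(x, rowvar=False)
    s2 = np.cov(y, rowvar=False)
    # Tr((S1 S2)^(1/2)) via the eigenvalues of S1 S2, which are
    # real and non-negative for covariance matrices
    eig = np.linalg.eigvals(s1 @ s2)
    tr_sqrt = np.sqrt(np.maximum(eig.real, 0.0)).sum()
    return float(((mu1 - mu2) ** 2).sum()
                 + np.trace(s1) + np.trace(s2) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(500, 4))
b = rng.normal(0.0, 1.0, size=(500, 4))  # same distribution -> small FID
c = rng.normal(3.0, 1.0, size=(500, 4))  # shifted distribution -> large FID
print("small shift:", round(fid(a, b), 3), "large shift:", round(fid(a, c), 3))
```

A larger FID between the original and new domains indicates a larger domain shift, which is where fusing lower-level representations pays off.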

The implementation of CHEF is available on GitHub. Learn more about CHEF in the blog post.

arXiv:2010.06498, 2020-10-13

IARAI Authors
Dr Sepp Hochreiter, Dr David Kreil, Dr Michael Kopp
A Theory of AI, Meta-learning
Deep Learning, Domain Shift, Few-Shot Learning, Representation Fusion

