We are grateful to Microsoft and Google for generous donations of cloud computing credits and to 4Paradigm for donating prizes. This project is also supported by ChaLearn and the HUMANIA chair of AI, grant ANR-19-CHIA-00222 of the Agence Nationale de la Recherche (France). Researchers and students from Université Paris-Saclay, Universiteit Leiden, TU Eindhoven, and 4Paradigm have contributed. The challenge is hosted by CodaLab (Université Paris-Saclay).
Machine learning has successfully solved many single-task problems, but at the expense of long, wasteful training times. Meta-learning promises to leverage the experience gained on previous tasks to train models faster, with fewer examples, and possibly with better performance. Approaches include learning from algorithm evaluations, from task properties (or meta-features), and from prior models.
Following the AutoDL 2019-2020 challenge series and past meta-learning challenges and benchmarks we have organized, including MetaDL@NeurIPS'21, we are organizing three competitions in 2022:
1st Round of Meta-learning from learning curves (accepted to WCCI 2022).
2nd Round of Meta-learning from learning curves (accepted to AutoML-Conf 2022).
Cross-domain meta-learning (to be submitted to NeurIPS'22).
We are also planning to organize a workshop at ICML'22 (proposal to be submitted).
Contact us if you want to join the organizing team.
Meta-learning from learning curves challenge (2nd round)
Cross-domain Meta-learning challenge
In this challenge, we focus on end-to-end meta-learning: meta-learning algorithms are exposed to a meta-dataset consisting of several tasks from a number of domains, and must return a learning machine ready to cope with a new learning task, from a new domain.
Data: We are working hard to extend the MetaAlbum benchmark we started putting together last year. It will consist of 30 datasets from 10 domains. They are all image classification datasets, uniformly formatted as 128x128 RGB images, carefully resized with anti-aliasing, manually cropped, and annotated with various meta-data, including super-classes. Ten of these datasets will be released to the public in their entirety for practice purposes, ten will be used in the challenge feedback phase, and ten in the challenge final test phase.
Protocol (tentative): We will introduce a novel challenge protocol and are currently considering several possibilities. In each phase, out of n=10 datasets, we would perform either:
a hold-out validation procedure at the meta-level, by bulk meta-training on k datasets, leaving (n-k) datasets out for meta-testing (k is fixed to a given value); or
a continual learning procedure by incrementally meta-training on datasets j=1 to k, and meta-testing on the remaining (n-k) datasets (k varies from 1 to n).
In either case, the procedure would be repeated for multiple dataset orders, resetting the memory of the meta-learning algorithm each time; the average and standard deviation of the results would then be computed.
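The two candidate procedures can be sketched as follows (our illustration; the function names and the use of random permutations over dataset orders are assumptions). In the continual variant, k runs only up to n-1 so that at least one dataset remains for meta-testing.

```python
import random

def holdout_splits(datasets, k, n_orders=5, seed=0):
    """Hold-out at the meta-level: for each random dataset order,
    meta-train on the first k datasets, meta-test on the other n-k."""
    rng = random.Random(seed)
    for _ in range(n_orders):
        order = rng.sample(datasets, len(datasets))  # random permutation
        yield order[:k], order[k:]

def continual_splits(datasets, n_orders=5, seed=0):
    """Continual meta-learning: for each order, grow the meta-training
    set one dataset at a time (k = 1..n-1), meta-testing on the rest."""
    rng = random.Random(seed)
    for _ in range(n_orders):
        order = rng.sample(datasets, len(datasets))
        for k in range(1, len(order)):
            yield order[:k], order[k:]
```

Averaging results over the repeated orders would then give the mean and standard deviation mentioned above.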
We are also considering organizing two tracks:
A model-centric track in which participants submit algorithms/agents capable of meta-training and returning a meta-trained learning machine (which can then learn a new task). The "data loader" would be provided and fixed.
A data-centric track in which participants submit a data loader supplying training examples in any way they like, based on available training data. The algorithms/agents capable of meta-training would be supplied and fixed.
In either case, we would meta-test on the n-k held-out datasets as follows. Each dataset/task has C classes (C>=20) with Nc=40 examples per class:
For each dataset in the meta-test set, split the data into training and test sets, always keeping the same nt=20 examples per class for testing, and:
vary the number of training samples (shots) ns<=20 (any-shot testing setting): ns = [1, 2, 5, 10, 20];
vary the number of classes (ways) nc<=C (any-way testing setting): nc = [2, 4, 8, 16, min(32, C)].
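The episode construction above can be sketched as follows (our illustration, assuming each class's 40 examples are split 20/20 into a training pool and the fixed test split; names are ours):

```python
import random

SHOTS = [1, 2, 5, 10, 20]   # any-shot settings, ns <= 20
WAYS = [2, 4, 8, 16, 32]    # any-way settings, capped at C per dataset

def make_episode(examples_per_class, n_way, n_shot, rng):
    """Sample one n-way n-shot episode: n_shot support examples per class
    from the training pool, and always the same 20 test examples per class."""
    classes = rng.sample(sorted(examples_per_class), n_way)
    support, query = [], []
    for c in classes:
        examples = examples_per_class[c]          # Nc = 40 example ids
        train_pool, test_pool = examples[:20], examples[20:]
        support += [(c, i) for i in rng.sample(train_pool, n_shot)]
        query += [(c, i) for i in test_pool]      # fixed nt = 20 per class
    return support, query
```

Iterating over all (n_way, n_shot) pairs from WAYS and SHOTS would cover the full any-way, any-shot grid for one dataset.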
For each domain in the final test phase, performance will be averaged over all experiments, and a ranking will be made using this average score.
The overall ranking will be made from the average rank of individual domain rankings.
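The average-rank aggregation can be sketched as follows (our illustration; for simplicity, ties are broken by sort order here, whereas an actual scoring program might average tied ranks):

```python
def domain_ranks(scores):
    """scores: {participant: score} for one domain, higher is better.
    Returns {participant: rank}, with rank 1 for the best score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {p: r + 1 for r, p in enumerate(ordered)}

def overall_ranking(per_domain_scores):
    """per_domain_scores: one {participant: score} dict per domain.
    Final order is by average of the per-domain ranks (lower is better)."""
    ranks = {}
    for scores in per_domain_scores:
        for p, r in domain_ranks(scores).items():
            ranks.setdefault(p, []).append(r)
    mean_rank = {p: sum(rs) / len(rs) for p, rs in ranks.items()}
    return sorted(mean_rank, key=mean_rank.get)
```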
About Meta-learning
For a comprehensive overview of Meta-learning, we refer to the following resources:
Leiden University (the Netherlands)
Universidad de la Sabana (Colombia)