MetaLearning Challenges


MetaLearn 2022

We are grateful to Microsoft and Google for generous cloud unit donations and to 4Paradigm for donating prizes. This project is also supported by ChaLearn and by the HUMANIA chair of AI, grant ANR-19-CHIA-00222 of the Agence Nationale de la Recherche (France). Researchers and students from Université Paris-Saclay, Universiteit Leiden, TU Eindhoven, and 4Paradigm have contributed. The challenge is hosted by Codalab (Université Paris-Saclay).

Machine learning has successfully solved many single-task problems, but at the expense of long, wasteful training times. Meta-learning promises to leverage the experience gained on previous tasks to train models faster, with fewer examples, and possibly with better performance. Approaches include learning from algorithm evaluations, from task properties (or meta-features), and from prior models.

Following the AutoDL 2019-2020 challenge series and past meta-learning challenges and benchmarks we have organized, including MetaDL@NeurIPS'21, we are organizing three competitions in 2022:

  • 1st Round of Meta-learning from learning curves (accepted to WCCI 2022).

  • 2nd Round of Meta-learning from learning curves (accepted to AutoML-Conf 2022).

  • Cross-domain meta-learning (to be submitted to NeurIPS'22).

We are also planning to organize a workshop at ICML'22 (proposal to be submitted).

Contact us if you want to join the organizing team.

ENTER THE 2ND LEARNING CURVES COMPETITION, OPENING MAY 16

https://codalab.lisn.upsaclay.fr/competitions/4894

Meta-learning from learning curves challenge (2nd round)

The main goal of this competition is to push the state of the art in meta-learning from learning curves, an important sub-problem of meta-learning. A learning curve records an algorithm's incremental performance improvements as a function of training time, number of iterations, and/or number of examples. Analysis of past ML challenges revealed that top-ranking methods often involve switching between algorithms during training. We are interested in meta-learning strategies that leverage information on partially trained algorithms, hence reducing the cost of training them to convergence. Furthermore, we want to study the potential benefit of learned policies, as opposed to applying hand-crafted black-box optimization methods.

We offer pre-computed learning curves as a function of time, to facilitate benchmarking. Meta-learners must “pay” a cost emulating computational time for revealing each next learning-curve value. Hence, meta-learners are expected to learn the exploration-exploitation trade-off between exploiting an already tried, well-performing candidate algorithm and exploring new candidate algorithms.

The first round of this competition was organized for WCCI 2022; please see the results on our website. In this new round, we propose an enlarged and more challenging meta-dataset. Having participated in the first round is NOT a prerequisite. The winners of the first round have open-sourced their code.
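To make this interaction concrete, here is a minimal Python sketch of an agent that balances exploration and exploitation over partially revealed learning curves. The class, its method names (`reset`, `suggest`, `observe`), and the epsilon-greedy policy are illustrative assumptions, not the challenge's actual API or a recommended strategy.

```python
import random


class GreedyEpsilonAgent:
    """Toy sketch: trade off exploring new algorithms vs. exploiting the
    best one seen so far, given partially revealed learning curves."""

    def __init__(self, epsilon=0.2):
        self.epsilon = epsilon
        self.best_scores = {}  # algorithm index -> best score observed so far

    def reset(self, num_algorithms):
        # Called at the start of each new meta-test task (hypothetical hook).
        self.num_algorithms = num_algorithms
        self.best_scores = {}

    def suggest(self):
        """Pick the next algorithm on which to spend a slice of the time budget."""
        untried = [a for a in range(self.num_algorithms) if a not in self.best_scores]
        if untried and random.random() < self.epsilon:
            return random.choice(untried)                            # explore
        if self.best_scores:
            return max(self.best_scores, key=self.best_scores.get)   # exploit
        return random.randrange(self.num_algorithms)

    def observe(self, algorithm, score, time_spent):
        """Record the newly revealed learning-curve point for `algorithm`."""
        prev = self.best_scores.get(algorithm, float("-inf"))
        self.best_scores[algorithm] = max(prev, score)
```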

Data: We created a meta-dataset from the 30 datasets of the AutoML challenge by running algorithms with different hyperparameters, from which we obtained learning curves for both the validation sets and the test sets.

Protocol: During a development phase, participants submit agents to be meta-trained and meta-tested on all data except the test learning curves of each task. During a final test phase, a scoring program computes the agent’s performance on the test learning curves, based on pre-recorded agent suggestions. Furthermore, the ingestion program runs a hold-out procedure: in each split, we hold out 5 of the 30 datasets for meta-testing and use the rest for meta-training.
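For illustration, a minimal sketch of such a hold-out split is shown below; the helper function and the random shuffling are assumptions for exposition, not the actual ingestion program.

```python
import random


def meta_split(datasets, n_holdout=5, seed=0):
    """Hold out `n_holdout` datasets for meta-testing; use the rest for meta-training."""
    rng = random.Random(seed)
    shuffled = datasets[:]
    rng.shuffle(shuffled)
    return shuffled[n_holdout:], shuffled[:n_holdout]  # (meta-train, meta-test)


# Example: 30 datasets -> 25 for meta-training, 5 held out for meta-testing.
meta_train, meta_test = meta_split([f"dataset_{i:02d}" for i in range(30)])
```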

Evaluation: The agent is evaluated by the Area under the agent’s Learning Curve (ALC). The values will be averaged over all meta-test datasets and shown on the leaderboards. The final ranking will be made according to the average test ALC.
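The sketch below illustrates one way an area-under-the-learning-curve score could be computed from revealed (time, score) pairs; the official scoring program may differ in details, e.g. in how the time axis is scaled or how the curve is interpolated.

```python
import numpy as np


def alc(times, scores, budget):
    """Area under the agent's learning curve, sketched as a step-wise integral
    of the best-so-far score over a time axis normalized to [0, 1]."""
    t = np.clip(np.asarray(times, dtype=float), 0, budget) / budget
    best = np.maximum.accumulate(np.asarray(scores, dtype=float))  # best score so far
    # The score reached at t[i] is held until t[i+1] (or until the budget runs out).
    widths = np.diff(np.append(t, 1.0))
    return float(np.sum(best * widths))


# Example: three curve points revealed within a budget of 100 time units.
print(alc(times=[10, 40, 80], scores=[0.55, 0.70, 0.72], budget=100))
```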

Meta-learning from learning curves challenge (1st round)

Congratulations to the winners of the 1st round! [TECHNICAL REPORT]

1st Place: Team MoRiHa

2nd Place: Team neptune ($500)

3rd Place: Team AIpert ($300)

4th Place: Team automl-freiburg ($200)



Cross-domain Meta-learning challenge

In this challenge, we focus on end-to-end meta-learning: meta-learning algorithms are exposed to a meta-dataset consisting of several tasks from a number of domains, and must return a learning machine ready to cope with a new learning task from a new domain.

Data: We are working hard to extend the MetaAlbum benchmark we started putting together last year. It will consist of 30 datasets from 10 domains. They are all image classification datasets, uniformly formatted as 128x128 RGB images, carefully resized with anti-aliasing, cropped manually, and annotated with various meta-data, including super-classes. Ten of those datasets will be revealed to the public in their entirety for practice purposes, ten will be used in the challenge feedback phase, and ten in the challenge final test phase.

Protocol (tentative): We will introduce a novel challenge protocol. We are currently considering several possibilities. In each phase, out of n = 10 datasets, we would perform either:

  • a hold-out validation procedure at the meta-level, by bulk meta-training on k datasets, leaving (n-k) datasets out for meta-testing (k is fixed to a given value); or

  • a continual learning procedure by incrementally meta-training on datasets j=1 to k, and meta-testing on the remaining (n-k) datasets (k varies from 1 to n).

In either case, the procedure would be repeated for multiple dataset orders, each time resetting the memory of the meta-learning algorithm; the average and standard deviation of the results would be computed.
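As a sketch of these two candidate protocols (the `meta_learner` interface with `reset`, `meta_fit`, and `meta_test` is a hypothetical stand-in, not the challenge API):

```python
import random
import statistics


def evaluate_protocol(datasets, meta_learner, mode="holdout", k=5, n_repeats=3, seed=0):
    """Sketch of the two candidate meta-level protocols described above.
    `meta_learner` is assumed to expose reset(), meta_fit(dataset), and
    meta_test(dataset) -> score; all names are illustrative."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_repeats):
        order = datasets[:]
        rng.shuffle(order)      # a new dataset order for each repetition
        meta_learner.reset()    # wipe the meta-learner's memory
        if mode == "holdout":
            # Bulk meta-training on k datasets, meta-testing on the remaining n - k.
            for d in order[:k]:
                meta_learner.meta_fit(d)
            scores += [meta_learner.meta_test(d) for d in order[k:]]
        else:
            # Continual: meta-train incrementally and meta-test on the rest
            # after each increment (j = 1 .. n - 1).
            for j in range(1, len(order)):
                meta_learner.meta_fit(order[j - 1])
                scores += [meta_learner.meta_test(d) for d in order[j:]]
    return statistics.mean(scores), statistics.stdev(scores)
```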

We are also considering organizing two tracks:

  • A model-centric track in which participants submit algorithms/agents capable of meta-training and returning a meta-trained learning machine (which can then learn a new task). The "data loader" would be provided and fixed.

  • A data-centric track in which participants submit a data loader supplying training examples in any way they like, based on available training data. The algorithms/agents capable of meta-training would be supplied and fixed.

In either case, we would meta-test on the n-k held-out datasets in the following way. Each dataset/task has C classes (C >= 20) with Nc = 40 examples per class:

For each dataset in the meta-test set, split the data into training and test sets, always keeping the same nt = 20 test examples per class, and (as sketched after this list):

  • vary the number of training examples per class (shots), ns <= 20: ns = [1, 2, 5, 10, 20] (any-shot testing setting);

  • vary the number of classes (ways), nc <= C: nc = [2, 4, 8, 16, min(32, C)] (any-way testing setting).
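The following sketch illustrates how such any-way / any-shot episodes could be constructed from a dataset with Nc = 40 examples per class; the function and data layout are illustrative assumptions, not the challenge's actual data loader.

```python
import random


def sample_episode(dataset, n_way, n_shot, n_test_per_class=20, seed=0):
    """Sketch of any-way / any-shot episode construction. `dataset` maps each
    class label to its list of 40 examples; names and layout are illustrative."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)      # pick n_way classes
    support, query = [], []
    for label in classes:
        examples = dataset[label]
        test_examples = examples[:n_test_per_class]   # always the same 20 test examples
        train_pool = examples[n_test_per_class:]      # remaining 20 available for training
        support += [(x, label) for x in rng.sample(train_pool, n_shot)]
        query += [(x, label) for x in test_examples]
    return support, query


# Example: a 5-way 1-shot episode from a toy dataset with 20 classes of 40 examples.
toy = {f"class_{c}": [f"img_{c}_{i}" for i in range(40)] for c in range(20)}
support, query = sample_episode(toy, n_way=5, n_shot=1)
```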

Evaluation:

  • For each domain in the final test phase, performances will be averaged over all experiments and a ranking will be made using this average score.

  • The overall ranking will be made from the average rank of individual domain rankings.
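A minimal sketch of this two-stage ranking (rank participants within each domain by average score, then rank them overall by their mean rank across domains) follows; the team names and scores are purely hypothetical.

```python
import statistics


def overall_ranking(domain_scores):
    """domain_scores: {domain: {participant: average score}}.
    Returns participants ordered best-first by average rank across domains
    (ties are ignored in this sketch)."""
    participants = sorted({p for scores in domain_scores.values() for p in scores})
    ranks = {p: [] for p in participants}
    for scores in domain_scores.values():
        ordered = sorted(scores, key=scores.get, reverse=True)  # higher score = rank 1
        for rank, p in enumerate(ordered, 1):
            ranks[p].append(rank)
    avg_rank = {p: statistics.mean(r) for p, r in ranks.items()}
    return sorted(avg_rank, key=avg_rank.get)


# Example with two hypothetical domains and three hypothetical teams.
print(overall_ranking({
    "insects": {"team_a": 0.81, "team_b": 0.78, "team_c": 0.70},
    "plankton": {"team_a": 0.60, "team_b": 0.66, "team_c": 0.64},
}))
```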

Congratulations to the NeurIPS'21 MetaDL winners! [slides]

About Meta-Learning

For a comprehensive overview of meta-learning, we refer to the following resources:

ChaLearn (USA)

University Paris-Saclay (France)

Codalab, UPSaclay (France)

Leiden University (the Netherlands)

Universidad de la Sabana (Colombia)

4Paradigm (China)