MetaDL-mini @AAAI2021


Following the success of the AutoDL 2019-2020 challenge series (which was part of the competition selection of NeurIPS 2019), we are organizing a series of challenges on Meta-Learning.

We are co-organizing a workshop on Meta-Learning at AAAI, February 2021 in Vancouver, Canada. We are happy to announce Chelsea Finn (Stanford University), Oriol Vinyals (Google DeepMind), Lilian Weng (OpenAI) and Richard Zemel (University of Toronto) as our keynote speakers.

Congratulations to the AAAI 2021 MetaDL winners: the MetaDelta team from Tsinghua University (1st place) and the team from the Czech Technical University (2nd place)!


Paper on the challenge


@InProceedings{pmlr-v140-el-baz21a,
  title     = {Advances in MetaDL: AAAI 2021 Challenge and Workshop},
  author    = {El Baz, Adrian and Guyon, Isabelle and Liu, Zhengying and van Rijn, Jan N. and Treguer, Sebastien and Vanschoren, Joaquin},
  booktitle = {AAAI Workshop on Meta-Learning and MetaDL Challenge},
  pages     = {1--16},
  year      = {2021},
  volume    = {140},
  series    = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
}

AAAI 2021 Workshop CFP

The performance of many machine learning algorithms depends heavily on the quality and quantity of the available data, and on (hyper-)parameter settings. In particular, deep learning methods, including convolutional neural networks, are known to be ‘data-hungry’ and require properly tuned hyper-parameters. Meta-learning is a way to address both issues. Simple but effective approaches reported recently include pre-training models on similar datasets: a good model or good hyperparameters can then be pre-determined, or learned model parameters can be transferred to the new dataset. As such, higher performance can be achieved with the same amount of data, or similar performance with less data (few-shot learning). This workshop, with a co-hosted competition, will focus on meta-learning and few-shot learning.
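As a toy illustration of the transfer idea above, the sketch below classifies query points by their nearest class prototype, computed from a handful of support examples per class. The 2-D "embeddings" are invented for illustration; in practice they would come from a network pre-trained on similar datasets.

```python
import numpy as np

# Nearest-prototype few-shot classification on pre-extracted features.
# The 2-D feature vectors below stand in for embeddings produced by a
# hypothetical pre-trained backbone network.

def prototypes(support_x, support_y):
    """Average the support embeddings of each class into one prototype."""
    classes = np.unique(support_y)
    return classes, np.stack([support_x[support_y == c].mean(axis=0)
                              for c in classes])

def predict(query_x, classes, protos):
    """Assign each query to the class of its nearest prototype."""
    d = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

# A 2-way, 3-shot episode with made-up embeddings
support_x = np.array([[0.9, 0.1], [1.1, -0.1], [1.0, 0.0],
                      [-1.0, 0.2], [-0.9, -0.2], [-1.1, 0.0]])
support_y = np.array([0, 0, 0, 1, 1, 1])
classes, protos = prototypes(support_x, support_y)
print(predict(np.array([[0.8, 0.0], [-1.2, 0.1]]), classes, protos))  # [0 1]
```

With a good pre-trained embedding, averaging as few as one to five support examples per class often yields a usable classifier, which is why this style of approach is a common few-shot baseline.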

The workshop will be held virtually, like all workshops at AAAI. We will organize a one-day workshop, featuring high-profile keynote speakers, a selection of submissions from the workshop, and a panel discussion. All other accepted papers will present their work in a virtual poster session.

Topics

Please note that papers beyond the scope of the competition are also welcome. We welcome all types of submissions that feature meta-learning and few-shot learning, with a specific focus on the following topics:

  • evaluation protocols and standardized benchmarks

  • generalization of meta-learning techniques across diverse datasets

  • papers that describe submissions to the co-hosted ChaLearn competition

  • traditional meta-learning, including active testing, meta-features and meta-datasets

  • few-shot learning techniques, such as MAML and Matching Networks
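To make the adaptation idea behind MAML concrete, here is a minimal first-order MAML (FOMAML) sketch on toy 1-D regression tasks. All data, learning rates, and task parameters are invented for illustration; real implementations operate on neural network weights rather than a single scalar.

```python
import numpy as np

# First-order MAML (FOMAML) on toy 1-D linear regression tasks.
# Each task has a target slope; the loss is mean squared error.

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    """Gradient of MSE(w * x, y) with respect to the scalar parameter w."""
    return np.mean(2 * (w * x - y) * x)

def fomaml_step(w, tasks, inner_lr=0.5, outer_lr=0.1, k_shot=10):
    """One meta-update: adapt per task, then average post-adaptation gradients."""
    meta_grad = 0.0
    for slope in tasks:
        x = rng.uniform(-1, 1, k_shot)
        y = slope * x
        w_adapted = w - inner_lr * loss_grad(w, x, y)   # inner adaptation step
        xq = rng.uniform(-1, 1, k_shot)                  # query set
        meta_grad += loss_grad(w_adapted, xq, slope * xq)
    return w - outer_lr * meta_grad / len(tasks)

w = 0.0
tasks = [1.0, 3.0]          # two tasks with different target slopes
for _ in range(200):
    w = fomaml_step(w, tasks)
# w ends up near the middle of the task distribution, an initialization
# from which one inner gradient step moves quickly toward either task.
```

Full MAML backpropagates through the inner update (a second-order term); the first-order variant shown here drops that term, which is a common and much cheaper approximation.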

Submissions

Papers must be formatted in AAAI two-column, camera-ready style. We welcome two types of submissions, regular papers (max. 7 pages, including references) and short papers (max. 4 pages, including references). All accepted papers will be hosted on the website of the workshop. Authors of accepted regular papers can opt-in to the formal PMLR proceedings. Submissions are due December 19th, 23:59 (AoE), 2020.

Keynote Speakers

Chelsea Finn

Stanford University

Meta-Learning for Robustness to the Changing World

Machine learning systems are often designed under the assumption that they will be deployed as a static model in a single static region of the world. However, the world is constantly changing, such that the future no longer looks exactly like the past, and even in relatively static settings, the system may be deployed in new, unseen parts of its world. While such continuous shifts in the data distribution can place major challenges on models acquired in machine learning, the model need not be static either: it can and should adapt. In this talk, I’ll discuss how we can allow deep networks to be robust to such distribution shift via adaptation. I will focus on meta-learning algorithms that enable this adaptation to be fast, first introducing the concept of meta-learning, then briefly overviewing several successful applications of meta-learning ranging from robotics to drug design, and finally discussing several recent works at the frontier of meta-learning research.


Oriol Vinyals

Google DeepMind

Perspective and Frontiers of Meta-Learning

Meta-Learning has gained significant interest from the scientific community, with a growing set of tools for rapid learning, adaptation, few-shot learning, and other areas. In this talk, I'll give my perspective on why Meta-Learning may play a role towards natural intelligence and briefly describe related tasks, techniques, advances, and the challenges that remain.

Lilian Weng

OpenAI

Asymmetric self-play for automatic goal discovery in robotic manipulation

We train a single, goal-conditioned policy that can solve many robotic manipulation tasks, including tasks with previously unseen goals and objects. To do so, we rely on asymmetric self-play for goal discovery, where two agents, Alice and Bob, play a game. Alice is asked to propose challenging goals and Bob aims to solve them. We show that this method is able to discover highly diverse and complex goals without any human priors. We further show that Bob can be trained with only sparse rewards, because the interaction between Alice and Bob results in a natural curriculum and Bob can learn from Alice's trajectory when relabeled as a goal-conditioned demonstration. Finally, we show that our method scales, resulting in a single policy that can transfer to many unseen hold-out tasks such as setting a table, stacking blocks, and solving simple puzzles.


Richard Zemel

University of Toronto

Few-shot learning in context

The area of few-shot learning has exploded recently, with novel modeling approaches demonstrating excellent results on a host of challenging tasks. An important question is how to make this paradigm more natural, so that it more closely matches human learning scenarios. In this talk I will describe two directions of current work that try to close this gap. In the first, we explore a few-shot scenario in which episodes do not have separate training and testing phases; instead, models are evaluated online while learning novel classes. As in the real world, where spatiotemporal context helps us retrieve learned skills, this online setting also features an underlying context that changes over time; object classes are correlated within a context, and inferring the correct context can lead to better performance. I will describe a new few-shot learning dataset based on large-scale indoor imagery that mimics the visual experience of an agent wandering within a world. I will show how popular few-shot learning approaches can be adapted but fall short in this setting, and propose a new model that outperforms them. The second line of work extends few-shot learning to a realistic setting where the similarities between examples can change from episode to episode depending on the task context, which again must be inferred. I will present new benchmark datasets for this flexible few-shot scenario and show how unsupervised learning obtains more generalizable representations. This work suggests that few-shot learning paradigms can be naturally extended to capture the important role played by context.

Program - February 9, 2021

The Workshop will be hosted on Zoom: https://zoom.us/j/93792387492

All times are in PST.

8:00: opening

8:00-8:45: keynote by Oriol Vinyals: Perspective and Frontiers of Meta-Learning [video link]

8:45-9:00: presentation of the co-hosted competition MetaDL by Adrian El Baz [video link]

9:00-9:30: contributed talk I by Yudong Chen, Chaoyu Guan, Zhikun Wei, Xin Wang, and Wenwu Zhu (Tsinghua University) - 1st place in MetaDL: MetaDelta: A Meta-Learning System for Few-shot Image Classification

9:30-10:00: contributed talk II by Tomáš Chobola, Daniel Vašata and Pavel Kordík (Czech Technical University) - 2nd place in MetaDL: Transfer learning based few-shot classification using optimal transport mapping from preprocessed latent space of backbone neural network

10:00-10:15: break, informal discussions

10:15-11:00: keynote by Chelsea Finn: Meta-Learning for Robustness to the Changing World [video link, first three minutes missing]

11:00-11:30: contributed talk III by Rui Li (Samsung AI Center), Hyeji Kim (-), Ondrej Bohdal (The University of Edinburgh), Da Li (Samsung), Timothy Hospedales (Edinburgh University), and Nic Lane (University of Cambridge): A Channel Coding Benchmark for Meta-Learning

11:30-12:00: contributed talk IV by Ahmed Ayyad (Technical University Munich), Raden Muaz (KAUST), Yuchen Li (KAUST), Shadi Albarqouni (TU Munich | Helmholtz AI), and Mohamed Elhoseiny (KAUST): Semi-Supervised Few-Shot Learning with Prototypical Random Walks

12:00-14:00: lunch and virtual poster session

14:00-14:45: keynote by Lilian Weng: Asymmetric self-play for automatic goal discovery in robotic manipulation [video link]

14:45-15:15: contributed talk V by Fabio Ferreira, Thomas Nierhoff, and Frank Hutter (University of Freiburg): Learning Synthetic Environments for Reinforcement Learning with Evolution Strategies [video link]

15:15-15:45: contributed talk VI by Rushang V Karia, and Siddharth Srivastava (Arizona State University): Learning Generalized Relational Heuristic Networks for Model-Agnostic Planning [video link]

15:45-16:15: break, informal discussions

16:15-17:00: keynote by Richard Zemel: Few-shot learning in context [video link]

17:00-17:30: panel discussion. Panelists: Chelsea Finn (Stanford University), Frank Hutter (University of Freiburg), Lars Kotthoff (University of Wyoming) and Richard Zemel (University of Toronto)

17:30-18:00: general discussion and closing remarks


Accepted Papers / Poster Session

The poster session will be hosted in Zoom breakout rooms. Each poster has been allocated a time slot and a breakout room, listed below. To have full control over your breakout room, you are kindly advised to run the latest version of Zoom (version 5.4.0 or higher).

Poster Session 12:00-13:00

  1. Henry Kvinge, Zachary New (Pacific Northwest National Lab), Nico Courts (University of Washington), Jung Lee, Lauren Phillips, Courtney Corley, Aaron R Tuor, Andrew Avila, Nathan Hodas (Pacific Northwest National Lab): Fuzzy Simplicial Networks: A Topology-Inspired Model to Improve Task Generalization in Few-shot Learning

  2. Nicholas I Kuo (Australian National University), Mehrtash Harandi (Monash University), Nicolas Fourrier (Leonard de Vinci Pole Universitaire), Christian Walder (CSIRO and the Australian National University), Gabriela Ferarro (Data61 CSIRO), Hanna Suominen (Australian National University and Data61): Learning to Continually Learn Rapidly from Few and Noisy Data

  3. Yudong Chen, Chaoyu Guan, Zhikun Wei, Xin Wang, Wenwu Zhu (Tsinghua University): MetaDelta: A Meta-Learning System for Few-shot Image Classification

  4. Anay Majee (Intel Technologies India), Kshitij Agrawal (Intel Corporation), Anbumani Subramanian (Intel): Few-Shot Learning for Road Object Detection

  5. Aroof Aimen, Sahil Sidheekh, Vineet Madan, and Narayanan C Krishnan (Indian Institute of Technology Ropar): Stress Testing of Meta-learning Approaches for Few-shot Learning

  6. Rui Manuel Leite and Pavel Brazdil (LIAAD-INESC Porto L.A./Faculty of Economics, University of Porto): Exploiting Performance-Based Similarity between Datasets in Metalearning

  7. Rui Li (Samsung AI Center), Hyeji Kim (-), Ondrej Bohdal (The University of Edinburgh), Da Li (Samsung), Timothy Hospedales (Edinburgh University), and Nic Lane (University of Cambridge): A Channel Coding Benchmark for Meta-Learning

  8. Fabio Ferreira, Thomas Nierhoff, and Frank Hutter (University of Freiburg): Learning Synthetic Environments for Reinforcement Learning with Evolution Strategies

Poster Session 13:00-14:00

  1. Tomáš Chobola, Daniel Vašata and Pavel Kordík (Czech Technical University): Transfer learning based few-shot classification using optimal transport mapping from preprocessed latent space of backbone neural network

  2. Eric Mitchell, Chelsea Finn, and Christopher D. Manning (Stanford University): Challenges of Acquiring Compositional Inductive Biases via Meta-Learning

  3. Rushang V Karia, and Siddharth Srivastava (Arizona State University): Learning Generalized Relational Heuristic Networks for Model-Agnostic Planning

  4. Ahmed Ayyad (Technical University Munich), Raden Muaz (KAUST), Yuchen Li (KAUST), Shadi Albarqouni (TU Munich | Helmholtz AI), and Mohamed Elhoseiny (KAUST): Semi-Supervised Few-Shot Learning with Prototypical Random Walks

  5. Zhengying Liu (Inria), and Isabelle Guyon (UPSud, INRIA, University Paris-Saclay and ChaLearn): Asymptotic Analysis of Meta-learning as a Recommendation Problem

  6. Samuel Gabriel Müller, André Biedenkapp, Frank Hutter (University of Freiburg): In-Loop Meta-Learning with Gradient Alignment Reward

  7. Haniye Kashgarani, and Lars Kotthoff (University of Wyoming): Is Algorithm Selection Worth It? Comparing Selecting Single Algorithms and Parallel Execution

  8. Mikhail Mekhedkin-Meskhi, Ricardo Vilalta (University of Houston), Adriano Rivolli, Rafael Gomes Mantovani (Federal Technology University of Paraná): Learning Abstract Task Representations

About Meta Learning

For a comprehensive overview of Meta-learning, we refer to the following resources:

Organization

This challenge would not have been possible without the help of many people.

Main organizers:

  • Adrian El Baz (U. Paris-Saclay, France)

  • Isabelle Guyon (U. Paris-Saclay; INRIA, France and ChaLearn, USA)

  • Zhengying Liu (U. Paris-Saclay, France)

  • Jan N. van Rijn (Leiden University, Netherlands)

  • Sebastien Treguer (U. Paris-Saclay, France)

  • Joaquin Vanschoren (Eindhoven University, the Netherlands)


Other contributors to the organization, starting kit, and datasets, include:

  • Salisu Mamman Abdulrahman (KUST, Wudil, Nigeria)

  • Stephane Ayache (AMU, France)

  • Kristin Bennett (RPI, New York, USA)

  • Pavel Brazdil (Univ. of Porto/INESC TEC, Portugal)

  • André C. P. L. F. de Carvalho (Universidade de São Paulo)

  • Katharina Eggensperger (University of Freiburg, Germany)

  • André Elisseeff (Google Zurich, Switzerland)

  • Hugo Jair Escalante (INAOE, Mexico and ChaLearn, USA)

  • Sergio Escalera (U. Barcelona, Spain and ChaLearn, USA)

  • Matthias Feurer (University of Freiburg, Germany)

  • Kemilly Dearo Garcia (Pega, Netherlands)

  • Bram van Ginneken (Radboud U. Nijmegen, The Netherlands)

  • Alexandre Gramfort (U. Paris-Saclay; INRIA, France)

  • Rafael Gomes Mantovani (Federal Technology University - Paraná)

  • Andreas Mueller (Microsoft, USA)

  • Bernhard Pfahringer (University of Waikato)

  • Florian Pfisterer (LMU Munich, Germany)

  • Fabio Pinto (Feedzai, Portugal)

  • Aske Plaat (Leiden University, the Netherlands)

  • Marc Schoenauer (U. Paris-Saclay, INRIA, France)

  • Carlos Soares (University of Porto, Portugal)

  • Lisheng Sun (U. Paris-Saclay; UPSud, France)

  • Ricardo Vilalta (University of Houston, USA)

  • Wei-Wei Tu (4paradigm, China)

  • Zhen Xu (Ecole Polytechnique and U. Paris-Saclay; INRIA, France)


The challenge is running on the Codalab platform, administered by Université Paris-Saclay and maintained by CKCollab LLC, with primary developers:

  • Eric Carmichael (CKCollab, USA)

  • Tyler Thomas (CKCollab, USA)

ChaLearn is the challenge organization coordinator. Microsoft is the primary sponsor of the challenge. 4Paradigm donated prizes and datasets, and contributed to the protocol, baseline methods, and beta-testing. Other institutions of the co-organizers provided in-kind contributions, including datasets, data formatting, baseline methods, and beta-testing.

Contact the organizers.