Following the success of the AutoDL 2019-2020 challenge series (part of the official competition program of NeurIPS 2019), we are organizing a series of challenges on Meta-Learning.
We are co-organizing a workshop on Meta-Learning at AAAI, February 2021, in Vancouver, Canada. We are happy to announce Chelsea Finn (Stanford University), Oriol Vinyals (Google DeepMind), Lilian Weng (OpenAI) and Richard Zemel (University of Toronto) as our keynote speakers.
Workshop paper deadline: December 19th, 23:59 (AoE)
We are running a series of milestone challenges of increasing difficulty.
MetaDL Challenge 2020 - Focuses on Image Classification tasks
MetaDL Challenge 2021 - Focuses on all sorts of Classification tasks
MetaDL Challenge 2022 - To be announced
AAAI 2021 Workshop CFP
The performance of many machine learning algorithms depends heavily on the quality and quantity of the available data, as well as on the hyperparameter settings. In particular, deep learning methods, including convolutional neural networks, are known to be ‘data-hungry’ and require properly tuned hyperparameters. Meta-learning is a way to address both issues. Simple but effective approaches reported recently include pre-training models on similar datasets. This way, a good model or good hyperparameters can be pre-determined, or learned model parameters can be transferred to the new dataset. As such, higher performance can be achieved with the same amount of data, or similar performance with less data (few-shot learning). This workshop, with a co-hosted competition, will focus on meta-learning and few-shot learning.
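The transfer idea above can be sketched in a few lines of plain Python. This is a toy, invented example (a one-parameter linear model, hand-made datasets, and arbitrary learning rates), meant only to show why a pre-trained initialization helps when the fine-tuning budget is small:

```python
def fit(w, data, lr=0.1, steps=50):
    # Plain gradient descent on mean squared error for the model y = w * x.
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

source = [(x, 3.0 * x) for x in (1.0, 2.0, 3.0)]   # "large" related dataset
target = [(1.0, 3.1), (2.0, 6.2)]                  # small new dataset, similar task

w_pretrained = fit(0.0, source, steps=50)           # pre-train on the source task
w_transfer   = fit(w_pretrained, target, steps=5)   # a few fine-tuning steps suffice
w_scratch    = fit(0.0, target, steps=5)            # same budget, random-ish init

print(mse(w_transfer, target) < mse(w_scratch, target))  # → True
```

Under the same five-step budget, the pre-trained initialization lands much closer to the new task's solution than training from scratch, which is exactly the effect the paragraph describes.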
The workshop will be held virtually, like all workshops at AAAI. We will organize a one-day workshop, featuring high-profile keynote speakers, a selection of submissions from the workshop, and a panel discussion. All other accepted papers will present their work in a virtual poster session.
Please note that papers beyond the scope of the competition are also welcome. We welcome all types of submissions featuring meta-learning and few-shot learning, with a particular focus on the following topics:
evaluation protocols and standardized benchmarks
generalization of meta-learning techniques across diverse datasets
papers that describe submissions to the co-hosted ChaLearn competition
traditional meta-learning, including active testing, meta-features and meta-datasets
few-shot learning techniques, such as MAML and Matching Networks
Papers must be formatted in AAAI two-column, camera-ready style. We welcome two types of submissions, regular papers (max. 7 pages, including references) and short papers (max. 4 pages, including references). All accepted papers will be hosted on the website of the workshop. Authors of accepted regular papers can opt-in to the formal PMLR proceedings. Submissions are due December 19th, 23:59 (AoE), 2020.
Submission site: https://cmt3.research.microsoft.com/METALEARNCC2021
Meta-Learning for Robustness to the Changing World
Machine learning systems are often designed under the assumption that they will be deployed as a static model in a single static region of the world. However, the world is constantly changing, such that the future no longer looks exactly like the past, and even in relatively static settings, the system may be deployed in new, unseen parts of its world. While such continuous shifts in the data distribution can place major challenges on models acquired in machine learning, the model need not be static either: it can and should adapt. In this talk, I’ll discuss how we can allow deep networks to be robust to such distribution shift via adaptation. I will focus on meta-learning algorithms that enable this adaptation to be fast, first introducing the concept of meta-learning, then briefly overviewing several successful applications of meta-learning ranging from robotics to drug design, and finally discussing several recent works at the frontier of meta-learning research.
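The fast-adaptation idea in the abstract can be illustrated with a first-order MAML-style loop on a toy one-parameter regression problem. Everything below (tasks, data, learning rates) is invented for illustration and is not the speaker's actual method:

```python
# Learn an initialization w_meta that adapts to a new related task
# in a single gradient step (first-order MAML-style sketch).
def grad(w, task):
    # Gradient of mean squared error for the model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in task) / len(task)

def adapt(w, task, lr=0.1, steps=1):
    # Inner loop: a few gradient steps on one task.
    for _ in range(steps):
        w -= lr * grad(w, task)
    return w

# Training tasks: linear functions with different slopes.
tasks = [[(x, s * x) for x in (1.0, 2.0)] for s in (2.0, 3.0, 4.0)]

w_meta = 0.0
for _ in range(200):                         # outer loop (first-order update)
    for task in tasks:
        w_fast = adapt(w_meta, task)         # adapt from the shared initialization
        w_meta -= 0.01 * grad(w_fast, task)  # move the initialization itself

# One gradient step now adapts well to an unseen, related task.
new_task = [(x, 3.5 * x) for x in (1.0, 2.0)]
w_adapted = adapt(w_meta, new_task)
```

In this one-parameter toy the meta-learned initialization simply lands near the centre of the task distribution, but the same inner/outer loop structure is what enables the fast adaptation of deep networks discussed in the talk.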
Perspective and Frontiers of Meta-Learning
Meta-Learning has gained significant interest from the scientific community, with an increasing set of tools for rapid learning, adaptation, few-shot learning, and other areas. In this talk, I'll give my perspective on why Meta-Learning may play a role on the path towards natural intelligence, and briefly describe related tasks, techniques, advances, and the challenges that remain.
Asymmetric self-play for automatic goal discovery in robotic manipulation
We train a single, goal-conditioned policy that can solve many robotic manipulation tasks, including tasks with previously unseen goals and objects. To do so, we rely on asymmetric self-play for goal discovery, where two agents, Alice and Bob, play a game. Alice is asked to propose challenging goals and Bob aims to solve them. We show that this method is able to discover highly diverse and complex goals without any human priors. We further show that Bob can be trained with only sparse rewards, because the interaction between Alice and Bob results in a natural curriculum and Bob can learn from Alice's trajectory when relabeled as a goal-conditioned demonstration. Finally, we show that our method scales, resulting in a single policy that can transfer to many unseen hold-out tasks such as setting a table, stacking blocks, and solving simple puzzles.
Few-shot learning in context
The area of few-shot learning has exploded recently, with novel modeling approaches demonstrating excellent results on a host of challenging tasks. An important question is how to make this paradigm more natural, to more closely match human learning scenarios. In this talk I will describe two directions of current work trying to close this gap. In the first, we explore a few-shot scenario in which episodes do not have separate training and testing phases, and instead models are evaluated online while learning novel classes. As in the real world, where spatiotemporal context helps us retrieve learned skills, this online setting also features an underlying context that changes throughout time; object classes are correlated within a context and inferring the correct context can lead to better performance. I will describe a new few-shot learning dataset based on large scale indoor imagery that mimics the visual experience of an agent wandering within a world. I will show how popular few-shot learning approaches can be adapted but fall short in this setting, and propose a new model that outperforms them. The second line of work extends few-shot learning to consider a realistic setting where the similarities between examples can change from episode to episode depending on the task context, which again must be inferred. I will show new benchmark datasets for this flexible few-shot scenario, and show how unsupervised learning obtains more generalizable representations. This work suggests that few-shot learning paradigms can be naturally extended to capture the important role played by context.
University of Toronto
Program - February 9, 2021
The Workshop will be hosted on Zoom: https://zoom.us/j/93792387492
All times are in PST.
8:00-8:45: keynote by Oriol Vinyals: Perspective and Frontiers of Meta-Learning
8:45-9:15: contributed talk I by Yudong Chen, Chaoyu Guan, Zhikun Wei, Xin Wang, and Wenwu Zhu (Tsinghua University): Meta_Learner: A Meta-Learning System for AAAI 2021 MetaDL Challenge
9:15-9:45: contributed talk II by Tomáš Chobola, Daniel Vašata and Pavel Kordík (Czech Technical University): Transfer learning based few-shot classification using optimal transport mapping from preprocessed latent space of backbone neural network
9:45-10:15: break, informal discussions
10:15-11:00: keynote by Chelsea Finn: Meta-Learning for Robustness to the Changing World
11:00-11:30: contributed talk III by Rui Li (Samsung AI Center), Hyeji Kim (-), Ondrej Bohdal (The University of Edinburgh), Da Li (Samsung), Timothy Hospedales (Edinburgh University), and Nic Lane (University of Cambridge): A Channel Coding Benchmark for Meta-Learning
11:30-12:00 contributed talk IV by Rushang V Karia, and Siddharth Srivastava (Arizona State University): Learning Generalized Relational Heuristic Networks for Model-Agnostic Planning
12:00-14:00: lunch and virtual poster session
14:00-14:45: keynote by Lilian Weng: Asymmetric self-play for automatic goal discovery in robotic manipulation
14:45-15:15: contributed talk V by Fabio Ferreira, Thomas Nierhoff, and Frank Hutter (University of Freiburg): Learning Synthetic Environments for Reinforcement Learning with Evolution Strategies
15:15-15:45: contributed talk VI by Ahmed Ayyad (Technical University Munich), Raden Muaz (KAUST), Yuchen Li (KAUST), Shadi Albarqouni (TU Munich | Helmholtz AI), and Mohamed Elhoseiny (KAUST): Semi-Supervised Few-Shot Learning with Prototypical Random Walks
15:45-16:15: break, informal discussions
16:15 -17:00: keynote by Richard Zemel: Few-shot learning in context
17:00-17:30: panel discussion. Panelists: Chelsea Finn (Stanford University), Frank Hutter (University of Freiburg), Lars Kotthoff (University of Wyoming) and Richard Zemel (University of Toronto)
17:30-18:00: general discussion and closing remarks
Tentative schedule for the Meta-Learning 2020 Challenge. For full information, see the CodaLab challenge.
September 2020: release of the starter kit and public datasets for the 2020 Challenge
December 5th: submission deadline for the Challenge
December 19th, 23:59 AoE: final submission deadline for the Workshop (no further extensions will be granted)
January 16th: notification of acceptance / rejection
January 24th, 23:59: camera-ready copy deadline
February 9th, 2021: Workshop on Meta-Learning and the Meta-Learning 2020 Challenge
About Meta Learning
For a comprehensive overview of Meta-learning, we refer to the following resources:
This challenge would not have been possible without the help of many people.
Adrian El Baz (U. Paris-Saclay, France)
Isabelle Guyon (U. Paris-Saclay; INRIA, France and ChaLearn, USA)
Zhengying Liu (U. Paris-Saclay, France)
Jan N. van Rijn (Leiden University, Netherlands)
Sebastien Treguer (U. Paris-Saclay, France)
Joaquin Vanschoren (Eindhoven University, the Netherlands)
Other contributors to the organization, starting kit, and datasets, include:
Salisu Mamman Abdulrahman (KUST, Wudil, Nigeria)
Stephane Ayache (AMU, France)
Kristin Bennett (RPI, New York, USA)
Pavel Brazdil (Univ. of Porto/INESC TEC, Portugal)
André C. P. L. F. de Carvalho (Universidade de São Paulo)
Katharina Eggensperger (University of Freiburg, Germany)
André Elisseeff (Google Zurich, Switzerland)
Hugo Jair Escalante (INAOE, Mexico and ChaLearn, USA)
Sergio Escalera (U. Barcelona, Spain and ChaLearn, USA)
Matthias Feurer (University of Freiburg, Germany)
Kemilly Dearo Garcia (Pega, Netherlands)
Bram van Ginneken (Radboud U. Nijmegen, The Netherlands)
Alexandre Gramfort (U. Paris-Saclay; INRIA, France)
Rafael Gomes Mantovani (Federal Technology University - Paraná)
Andreas Mueller (Microsoft, USA)
Bernhard Pfahringer (University of Waikato)
Florian Pfisterer (LMU Munich, Germany)
Fabio Pinto (Feedzai, Portugal)
Aske Plaat (Leiden University, the Netherlands)
Marc Schoenauer (U. Paris-Saclay, INRIA, France)
Carlos Soares (University of Porto, Portugal)
Lisheng Sun (U. Paris-Saclay; UPSud, France)
Ricardo Vilalta (University of Houston, USA)
Wei-Wei Tu (4Paradigm, China)
Zhen Xu (Ecole Polytechnique and U. Paris-Saclay; INRIA, France)
Eric Carmichael (CKCollab, USA)
Tyler Thomas (CKCollab, USA)
ChaLearn is the challenge organization coordinator. Google is the primary sponsor of the challenge and helped define the tasks, protocol, and data formats. 4Paradigm donated prizes and datasets, and contributed to the protocol, baseline methods, and beta-testing. The institutions of the other co-organizers provided in-kind contributions, including datasets, data formatting, baseline methods, and beta-testing.