This challenge aims to find meta-learning approaches that are effective in the few-shot learning setting for image classification tasks. The domains are very diverse (medicine, biology, chemistry, manufacturing, object recognition, human postures, etc.) and include images at different scales. The approaches taken should be time-efficient: meta-learning procedures should not exceed a specific time budget. More details are available in the Evaluation tab.
The competition is divided into 2 phases: Feedback and Final. This website is the Feedback Phase portal. In Feedback Phase, participants can develop their own approaches, make submissions, and check their performance on the leaderboard for each dataset. In Final Phase, the last valid submission made during Feedback Phase is blind-tested on 5 unseen meta-datasets. Results in Final Phase are used for determining the winners. Participants who outperform the baseline will be invited to enter the Final Phase.
The participants need to submit their code through this platform. The submitted code must respect a specific API that is detailed in the tutorial notebook (see the Data & Starting Kit tab). Following this API, participants first "meta-train" a meta-learner, resulting in a learner. This learner can then be trained on an unseen meta-test task and evaluated on unlabeled examples of the same task. A participant's submission is evaluated by the capacity of this learner to quickly adapt to new unseen tasks. Please refer to the Evaluation tab for more details.
Prizes: $500 1st place, $300 2nd place, $200 3rd place
Sponsors: ChaLearn, Microsoft, Google, and 4Paradigm
Starting kit: Dedicated GitHub repository.
Questions: Contact the organizers.
This challenge follows a previous challenge held in conjunction with AAAI 2021. We call the phase preceding the start of the Feedback Phase the Public Phase, during which participants can practice on small datasets from the previous challenge.
For both Feedback Phase and Public Phase, the performance of a meta-learning algorithm is measured by evaluating 600 episodes at meta-test time. The participant needs to implement a MyMetaLearner class that can meta-fit a meta-train set and produce a Learner object, which in turn can fit any support set (a.k.a. training set) generated from a meta-test set and produce a Predictor. The accuracy of these predictors on each query set (or test set) is then averaged to produce a final score. In Feedback Phase, this score is used to form a leaderboard. In Final Phase, this score is used as the criterion for deciding winners (and a leaderboard will also be released). One important aspect of the challenge is that submissions must produce a Learner
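The chain of classes described above (MyMetaLearner meta-fits into a Learner, which fits a support set into a Predictor) can be sketched as follows. This is a minimal illustration, not the exact starting-kit code: the class and method names follow the description above, but the real signatures and data formats are defined in the tutorial notebook, and the bodies below are placeholders.

```python
class Predictor:
    def predict(self, query_images):
        """Return predictions for unlabeled query examples."""
        # Placeholder logic: always predict class 0.
        return [0 for _ in query_images]


class Learner:
    def fit(self, support_set):
        """Adapt to one meta-test task using its labeled support set."""
        return Predictor()


class MyMetaLearner:
    def meta_fit(self, meta_train_set):
        """Meta-train on the meta-train set and return a Learner."""
        return Learner()
```

At evaluation time, the platform would call these three methods in order for each of the 600 meta-test episodes and average the resulting query-set accuracies.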
within 2 hours for each meta-dataset (10 hours in total for all 5 meta-datasets). Each submission has access to 4 Tesla M60 GPUs for this amount of time.
Note that the participant is responsible for guaranteeing that their submission terminates within the time limit. Otherwise, the submission will still consume the entire time budget of the associated submission, a SoftTimeLimitExceeded error will be thrown, and the submission will receive negative scores.
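Since exceeding the budget forfeits the score, a common precaution is to check elapsed wall-clock time inside the meta-training loop and stop with a safety margin. The sketch below is illustrative (the budget value comes from the rules above; the margin and function name are our own choices, not part of the API):

```python
import time

TIME_BUDGET = 2 * 60 * 60  # 2 hours per meta-dataset, in seconds
SAFETY_MARGIN = 5 * 60     # stop a few minutes early (illustrative choice)


def meta_fit_with_budget(n_steps, budget=TIME_BUDGET, margin=SAFETY_MARGIN):
    """Run up to n_steps meta-training steps, stopping before the budget runs out."""
    start = time.time()
    steps_done = 0
    for _ in range(n_steps):
        if time.time() - start > budget - margin:
            break  # bail out before SoftTimeLimitExceeded would be raised
        # ... one meta-training step would go here ...
        steps_done += 1
    return steps_done
```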
For each meta-dataset, we use the 5-way 5-shot few-shot learning setting at meta-test time:
Support set: 5 classes and 5 examples per class (labeled examples)
Query set: 5 classes and 19 examples per class (unlabeled examples)
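The episode sizes implied by this setting can be computed directly from the numbers above:

```python
N_WAY = 5             # classes per episode
K_SHOT = 5            # labeled support examples per class
QUERY_PER_CLASS = 19  # unlabeled query examples per class

support_size = N_WAY * K_SHOT          # labeled examples per episode
query_size = N_WAY * QUERY_PER_CLASS   # unlabeled examples per episode
```

So each episode provides 25 labeled examples for adaptation and 95 unlabeled examples for scoring.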
After we compute the accuracy for all 5 meta-datasets, the overall ranking is used as the final score for evaluation and will appear in the leaderboard. It is computed by averaging the ranks (among all participants) of the accuracies obtained on the 5 meta-datasets.
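The average-rank computation can be sketched as follows. This is our own illustration of the rule stated above (function name and tie handling are assumptions, not the platform's exact implementation):

```python
def final_scores(accuracies):
    """Compute each participant's average rank across meta-datasets.

    accuracies: dict mapping participant -> list of accuracies,
    one per meta-dataset. Lower average rank is better.
    """
    participants = list(accuracies)
    n_datasets = len(next(iter(accuracies.values())))
    ranks = {p: [] for p in participants}
    for d in range(n_datasets):
        # Rank 1 goes to the highest accuracy on this meta-dataset.
        order = sorted(participants, key=lambda p: accuracies[p][d], reverse=True)
        for rank, p in enumerate(order, start=1):
            ranks[p].append(rank)
    return {p: sum(r) / n_datasets for p, r in ranks.items()}
```

For example, a participant ranked 1st on every meta-dataset gets an average rank of 1.0, regardless of the absolute accuracy values.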
This challenge would not have been possible without the help of many people:
Adrian El Baz (Université Paris-Saclay, France)
Isabelle Guyon (Université Paris-Saclay; INRIA, France and ChaLearn, USA)
Jennifer He (4Paradigm, China)
Mike Huisman (Leiden University, the Netherlands)
Zhengying Liu (Université Paris-Saclay, France)
Yui Man Lui (University of Central Missouri, USA)
Felix Mohr (Universidad de La Sabana, Colombia)
Jan N. van Rijn (Leiden University, the Netherlands)
Sebastien Treguer (Université Paris-Saclay, France)
Wei-Wei Tu (4Paradigm, China)
Jun Wan (Chinese Academy of Sciences, China)
Lisheng Sun (Université Paris-Saclay, France)
Haozhe Sun (Université Paris-Saclay; LISN, France)
Phan Anh Vu (Université Paris-Saclay; LISN, France)
Ihsan Ullah (Université Paris-Saclay; LISN, France)
Joaquin Vanschoren (Eindhoven University of Technology, the Netherlands)
Benjia Zhou (Chinese Academy of Sciences, China)
The challenge is running on the Codalab platform, administered by Université Paris-Saclay and maintained by:
Tyler Thomas (CKCollab, USA)
Loic Sarrazin (Artelys, France)
Anne-Catherine Letournel (UPSaclay, France)
Adrien Pavao (UPSaclay, France)
ChaLearn is the challenge organization coordinator. Microsoft and Google are the primary sponsors of the challenge. 4Paradigm donated prizes and datasets, and contributed to the protocol, baseline methods, and beta-testing. The co-organizers' other institutions provided in-kind contributions, including datasets, data formatting, baseline methods, and beta-testing.
During the challenge, each phase (i.e., Feedback Phase or Final Phase) has 5 meta-datasets, each from a different image domain. Each meta-dataset is used to generate few-shot learning tasks via the same API as the Meta-dataset repo. For more details, please refer to the starting kit, which can be downloaded or cloned below.
This starting kit provides a walkthrough for making a valid submission with an example meta-dataset: Omniglot.
For a quick submission, you can directly download this mysubmission.zip file and submit it by clicking on the "Upload a Submission" button. This file can also be generated by going through the tutorial notebook in the starting kit.
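If you prefer to package the archive yourself, a helper like the one below works. It is a hypothetical sketch: the exact files required and their layout are defined by the starting kit, and the file names used here are illustrative. It places files at the archive root (no top-level folder), a common requirement for code submissions.

```python
import pathlib
import zipfile


def make_submission(src_dir, out_zip="mysubmission.zip"):
    """Zip every file under src_dir with paths relative to src_dir."""
    src = pathlib.Path(src_dir)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src.rglob("*")):
            if path.is_file():
                # Archive paths are relative to src_dir: no top-level folder.
                zf.write(path, path.relative_to(src))
    return out_zip
```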
All images in the challenge are of shape [128,128,3].
Feedback data: The data is kept hidden but you can get quick feedback on the leaderboard.
Final data: This phase will start at the very end of the challenge. The data is kept hidden.
Start: Aug. 2, 2021, midnight
Description: Please make submissions by clicking on the "Upload a Submission" button. Then you can view the submission results of your algorithm on each meta-dataset (Meta-dataset 1, Meta-dataset 2, etc.) in the corresponding Results tab.
Label | Description | Start
---|---|---
Meta-dataset 1 | Submission results of your algorithm on Meta-dataset 1. | Aug. 2, 2021, midnight
Meta-dataset 2 | Submission results of your algorithm on Meta-dataset 2. | Aug. 2, 2021, midnight
Meta-dataset 3 | Submission results of your algorithm on Meta-dataset 3. | Aug. 2, 2021, midnight
Meta-dataset 4 | Submission results of your algorithm on Meta-dataset 4. | Aug. 2, 2021, midnight
Meta-dataset 5 | Submission results of your algorithm on Meta-dataset 5. | Aug. 2, 2021, midnight
End: Oct. 2, 2021, 11:59 p.m.