  • 32 Participants
  • 188 Submissions
  • Competition Ends: Oct. 2, 2021, 11:59 p.m.

NeurIPS 2021 MetaDL Challenge

Track 1: Image data

This challenge aims to find meta-learning approaches that are effective in the few-shot learning setting for image classification tasks. The domains are very diverse (medicine, biology, chemistry, manufacturing, object recognition, people postures, etc.), and include images at different scales. The approaches taken should be time-efficient; that is, meta-learning procedures must not exceed a specified time budget. More details are available in the Evaluation tab.

The competition is divided into two phases: Feedback and Final. This website is the Feedback Phase portal. In the Feedback Phase, participants can develop their own approaches, make submissions, and check their performance on the leaderboard for each dataset. In the Final Phase, the last valid submission from the Feedback Phase is blind-tested on 5 unseen meta-datasets. Results in the Final Phase are used to determine the winners. Participants who outperform the baseline will be invited to enter the Final Phase.

The participants need to submit their code through this platform. The submitted code must respect a specific API that is detailed in the notebook tutorial (see the Data & Starting Kit tab). Following this API, participants first "meta-train" a meta-learner, resulting in a learner. This learner can then be trained on an unseen meta-test task and evaluated on unlabeled examples of the same task. A participant's submission is evaluated by the capacity of this learner to adapt quickly to new, unseen tasks. Please refer to the Evaluation tab for more details.
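For illustration, here is a minimal, self-contained sketch of this API using a nearest-centroid classifier as a stand-in model. The class and method names follow the starting-kit convention (MyMetaLearner.meta_fit, fit, predict), but the exact signatures and data formats are defined in the notebook tutorial, so treat this as a sketch rather than a drop-in submission:

```python
import numpy as np

class MyPredictor:
    def __init__(self, centroids):
        self.centroids = centroids  # shape: (n_ways, n_features)

    def predict(self, query_images):
        # Flatten each image and assign it to the nearest class centroid.
        x = query_images.reshape(len(query_images), -1)
        dists = ((x[:, None, :] - self.centroids[None, :, :]) ** 2).sum(-1)
        return dists.argmin(axis=1)

class MyLearner:
    def fit(self, support_images, support_labels):
        # Adapt to one meta-test task: compute one centroid per class
        # from the labeled support set (5 classes x 5 examples here).
        x = support_images.reshape(len(support_images), -1)
        centroids = np.stack([x[support_labels == c].mean(axis=0)
                              for c in np.unique(support_labels)])
        return MyPredictor(centroids)

class MyMetaLearner:
    def meta_fit(self, meta_train_generator):
        # A real submission would meta-train here (e.g., learn an image
        # embedding from meta-train episodes); this stand-in skips
        # meta-training entirely and returns the Learner directly.
        return MyLearner()
```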

Prizes: $500 1st place, $300 2nd place, $200 3rd place

Sponsors: ChaLearn, Microsoft, Google, and 4Paradigm

Starting kit: Dedicated GitHub repository.

Questions: Contact the organizers.

MetaDL Evaluation

This website is dedicated to the Feedback Phase.

This challenge follows a previous challenge held in conjunction with AAAI 2021. We call the phase preceding the start of the Feedback Phase the Public Phase, during which participants can practice on small datasets from the previous challenge.

For both the Feedback Phase and the Public Phase, the performance of a meta-learning algorithm is measured by evaluating 600 episodes at meta-test time. The participant needs to implement a MyMetaLearner class that can meta-fit a meta-train set and produce a Learner object, which in turn can fit any support set (a.k.a. training set) generated from a meta-test set and produce a Predictor. The accuracy of these predictors on each query set (or test set) is then averaged to produce a final score. In the Feedback Phase, this score is used to form a leaderboard. In the Final Phase, this score is the criterion for deciding winners (and a leaderboard will also be released). One important aspect of the challenge is that submissions must produce a Learner within 2 hours for each meta-dataset (10 hours in total for all 5 meta-datasets). Each submission has access to 4 Tesla M60 GPUs for this amount of time.
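The per-meta-dataset score is thus a plain mean of query-set accuracies over 600 episodes. Below is a hedged sketch of such an evaluation loop; the episode format and names are hypothetical, and the actual scoring program is provided by the platform:

```python
import numpy as np

def evaluate(learner, episodes):
    # Average query-set accuracy over meta-test episodes (600 in this
    # challenge). Each episode is assumed to be a tuple of numpy arrays:
    # (support_images, support_labels, query_images, query_labels).
    accuracies = []
    for support_x, support_y, query_x, query_y in episodes:
        predictor = learner.fit(support_x, support_y)
        predictions = predictor.predict(query_x)
        accuracies.append(np.mean(predictions == query_y))
    return float(np.mean(accuracies))
```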

Note that participants are responsible for ensuring that their submission terminates within the time limit. Otherwise, the submission will still consume its entire time-limit quota; a SoftTimeLimitExceeded error will be thrown and the submission will receive negative scores.
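One simple safeguard, sketched below, is to track elapsed wall-clock time inside meta-training and stop before the budget runs out. The 2-hour figure is the per-meta-dataset limit stated above; the safety margin and the meta_update callback are illustrative assumptions:

```python
import time

TIME_BUDGET = 2 * 3600   # per-meta-dataset limit, in seconds
SAFETY_MARGIN = 300      # stop 5 minutes early (arbitrary choice)

def meta_fit_with_deadline(meta_train_generator, meta_update):
    # Stop meta-training before the platform's hard limit so that the
    # submission returns a Learner instead of raising
    # SoftTimeLimitExceeded and scoring negatively.
    start = time.time()
    for episode in meta_train_generator:
        if time.time() - start > TIME_BUDGET - SAFETY_MARGIN:
            break
        meta_update(episode)
```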

Episodes at meta-test time

For each meta-dataset, we use the 5-way 5-shot few-shot learning setting at meta-test time:

  • Support set: 5 classes and 5 examples per class (labeled examples)
  • Query set: 5 classes and 19 examples per class (unlabeled examples)
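Each meta-test episode therefore contains 25 labeled support examples and 95 unlabeled query examples:

```python
N_WAYS, N_SHOTS, N_QUERY = 5, 5, 19   # meta-test episode setting
support_size = N_WAYS * N_SHOTS       # 5 x 5  = 25 labeled examples
query_size = N_WAYS * N_QUERY         # 5 x 19 = 95 unlabeled examples
```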

After the accuracy is computed for all 5 meta-datasets, the overall ranking is used as the final score for evaluation and appears on the leaderboard. It is computed by averaging the ranks (among all participants) of the accuracies obtained on the 5 meta-datasets.
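For illustration, the sketch below computes this average-rank score for three hypothetical participants (rank 1 = highest accuracy on a meta-dataset; the tie-breaking convention here is an assumption, not the official one):

```python
import numpy as np

# Rows: participants; columns: the 5 meta-datasets.
# Accuracies are made-up illustration values.
acc = np.array([[0.71, 0.65, 0.80, 0.55, 0.60],
                [0.69, 0.70, 0.78, 0.58, 0.62],
                [0.75, 0.60, 0.77, 0.50, 0.59]])

# Rank participants per meta-dataset (1 = highest accuracy), then
# average each participant's ranks across the 5 meta-datasets;
# a lower average rank is better.
ranks = (-acc).argsort(axis=0).argsort(axis=0) + 1
final_scores = ranks.mean(axis=1)
print(final_scores)  # [1.8 1.6 2.6]
```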

Challenge Rules

  • General Terms: This challenge is governed by the General ChaLearn Contest Rule Terms, the Codalab Terms and Conditions, and the specific rules set forth on this page.
  • Announcements: To receive announcements and be informed of any change in rules, the participants must provide a valid email.
  • Conditions of participation: Participation requires complying with the rules of the challenge. Prize eligibility is restricted by US government export regulations, see the General ChaLearn Contest Rule Terms. The organizers, sponsors, their students, close family members (parents, sibling, spouse or children) and household members, as well as any person having had access to the truth values or to any information about the data or the challenge design giving him (or her) an unfair advantage, are excluded from participation. A disqualified person may submit one or several entries in the challenge and request to have them evaluated, provided that they notify the organizers of their conflict of interest. If a disqualified person submits an entry, this entry will not be part of the final ranking and does not qualify for prizes. The participants should be aware that ChaLearn and the organizers reserve the right to evaluate for scientific purposes any entry made in the challenge, whether or not it qualifies for prizes.
  • Dissemination: The challenge is part of the official selection of the NeurIPS 2021 conference. Top ranking participants will be invited to contribute to a collective paper on the results and analysis of the challenge, to appear in the Proceedings of Machine Learning Research (PMLR).
  • Registration: The participants must register to Codalab and provide a valid email address. Teams must register only once and provide a group email, which is forwarded to all team members. Teams or solo participants registering multiple times to gain an advantage in the competition may be disqualified.
  • Anonymity: The participants who do not present their results at the workshop can elect to remain anonymous by using a pseudonym. Their results will be published on the leaderboard under that pseudonym, and their real name will remain confidential. However, the participants must disclose their real identity to the organizers to claim any prize they might win. See our privacy policy for details.
  • Submission method: The results must be submitted through this CodaLab competition site. The number of submissions per day and the maximum total computational time are restricted and subject to change, depending on the number of participants. Using multiple accounts to increase the number of submissions is NOT permitted. In case of problems, email metadl@chalearn.org. The entries must be formatted as specified on the Instructions page.
  • Reproducibility: Participants should make efforts to guarantee the reproducibility of their method, for example by fixing all random seeds involved. The organizers reserve the right to eliminate entries with abnormally large variance in performance, or to run the same entry multiple times and select the run yielding the WORST result.
  • Final Phase qualification: Only participants who have observed the rules and beaten the baseline (fo-MAML) results in the Feedback Phase will qualify for the Final Phase.
  • Prizes: The three top-ranking participants in the Final Phase blind testing may qualify for prizes. The last valid submission in the Feedback Phase will be automatically submitted to the Final Phase for final evaluation. Participants must fill out a fact sheet (TBA) briefly describing their methods. There is no other publication requirement. If they accept their prize, the winners must make their code publicly available under an OSI-approved license (e.g., Apache 2.0, MIT, or a BSD-like license) within a week of the deadline for submitting the final results. Entries exceeding the time budget will not qualify for prizes. In case of a tie, the prize will go to the participant who submitted their entry first. Non-winners and entrants who decline their prize retain all rights on their entries and are not obliged to publicly release their code.

Credits

This challenge would not have been possible without the help of many people:

  • Adrian El Baz (Université Paris-Saclay, France)

  • Isabelle Guyon (Université Paris-Saclay; INRIA, France and ChaLearn, USA)

  • Jennifer He (4Paradigm, China)

  • Mike Huisman (Leiden University, the Netherlands)

  • Zhengying Liu (Université Paris-Saclay, France)

  • Yui Man Lui (University of Central Missouri, USA)

  • Felix Mohr (Universidad de la Sabana, Colombia)

  • Jan N. van Rijn (Leiden University, the Netherlands)

  • Sebastien Treguer (Université Paris-Saclay, France)

  • Wei-Wei Tu (4Paradigm, China)

  • Jun Wan (Chinese Academy of Sciences, China)

  • Lisheng Sun (Université Paris-Saclay, France)

  • Haozhe Sun (Université Paris-Saclay; LISN, France)

  • Phan Anh Vu (Université Paris-Saclay; LISN, France)

  • Ihsan Ullah (Université Paris-Saclay; LISN, France)

  • Joaquin Vanschoren (Eindhoven University of Technology, the Netherlands)

  • Benjia Zhou (Chinese Academy of Sciences, China)

The challenge is running on the Codalab platform, administered by Université Paris-Saclay and maintained by:

  • Tyler Thomas (CKCollab, USA)

  • Loic Sarrazin (Artelys, France)

  • Anne-Catherine Letournel (UPSaclay, France)

  • Adrien Pavao (UPSaclay, France)

ChaLearn is the challenge organization coordinator. Microsoft and Google are the primary sponsors of the challenge. 4Paradigm donated prizes and datasets, and contributed to the protocol, baseline methods, and beta-testing. The other co-organizers' institutions provided in-kind contributions, including datasets, data formatting, baseline methods, and beta-testing.

Contact the organizers.

Data & Starting Kit

During the challenge, each phase (i.e., Feedback Phase or Final Phase) has 5 meta-datasets, each from a different image domain. Each meta-dataset is used to generate few-shot learning tasks via the same API as the Meta-dataset repo. For more details, please refer to the starting kit, which can be downloaded or cloned below.

Download/Clone the Starting Kit

This starting kit provides a walkthrough for making a valid submission with an example meta-dataset: Omniglot.

For a quick submission, you can directly download this mysubmission.zip file and submit it by clicking on the "Upload a Submission" button. This file can also be generated by going through the tutorial notebook in the starting kit.

All images in the challenge are of shape [128,128,3].
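Since many few-shot learning backbones expect smaller inputs, a submission may want to resize and rescale these images. A minimal TensorFlow sketch, assuming raw pixel values in [0, 255] (adjust if the starting kit's pipeline already normalizes) and an arbitrary 84x84 target size:

```python
import tensorflow as tf

def preprocess(images, target_size=(84, 84)):
    # Challenge images arrive with shape [128, 128, 3]; resize for a
    # backbone expecting a different input size and rescale to [0, 1],
    # assuming raw pixel values in [0, 255].
    images = tf.image.resize(images, target_size)
    return images / 255.0
```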

Feedback data: The data is kept hidden but you can get quick feedback on the leaderboard.

Final data: This phase will start at the very end of the challenge. The data is kept hidden.

All meta-datasets

Start: Aug. 2, 2021, midnight

Description: Please make submissions by clicking on the 'Upload a Submission' button. You can then view your algorithm's results on each meta-dataset (Meta-dataset 1, Meta-dataset 2, etc.) in the corresponding Results tab.

Datasets:

Label            Description                                                    Start
Meta-dataset 1   Submission results of your algorithm on Meta-dataset 1.       Aug. 2, 2021, midnight
Meta-dataset 2   Submission results of your algorithm on Meta-dataset 2.       Aug. 2, 2021, midnight
Meta-dataset 3   Submission results of your algorithm on Meta-dataset 3.       Aug. 2, 2021, midnight
Meta-dataset 4   Submission results of your algorithm on Meta-dataset 4.       Aug. 2, 2021, midnight
Meta-dataset 5   Submission results of your algorithm on Meta-dataset 5.       Aug. 2, 2021, midnight

Competition Ends

Oct. 2, 2021, 11:59 p.m.
