
Automated Speech Classification

Asian Conference on Machine Learning (ACML) 2019 will be held at WINC AICHI, Nagoya, Japan, from November 17 to 19, 2019. AutoSpeech is one of the competitions in the main conference, provided by 4Paradigm, ChaLearn and Google.

In the last decade, deep learning (DL) has achieved remarkable success in speech-related tasks, e.g., speaker verification, language identification and emotion classification. However, in practice, it is very difficult to switch between different tasks without human effort. To address this problem, Automated Deep Learning (AutoDL) has been proposed to explore automatic pipelines that train an effective DL model for a given task. Since its proposal, AutoDL has been explored in various applications, and a series of AutoDL competitions, e.g., Automated Natural Language Processing (AutoNLP) and Automated Computer Vision (AutoCV), have been organized by 4Paradigm, Inc. and ChaLearn (sponsored by Google). These competitions have drawn a lot of attention from both academic researchers and industrial practitioners.

In this challenge, we further propose the Automated Speech (AutoSpeech) competition, which aims at automated solutions for speech-related tasks. This challenge is restricted to multi-class classification problems from different speech classification domains. The provided solutions are expected to discover various kinds of paralinguistic speech attribute information, such as speaker, language and emotion, when only raw data (speech features) and meta information are provided. There are two kinds of datasets, which correspond to the public and private leaderboards, respectively. Five public datasets (without labels in the testing part) are provided to the participants for developing AutoSpeech solutions. Afterward, solutions will be evaluated on five unseen datasets without human intervention. The results on these five datasets determine the final ranking.
This is the first AutoSpeech competition; it focuses on speech classification, which poses new challenges to the participants, as listed below:
- How to automatically discover various kinds of paralinguistic information in spoken conversation?
- How to automatically extract useful features for different tasks from speech data?
- How to automatically handle both long and short duration speech data?
- How to automatically design effective neural network structures?
- How to build and automatically adapt pre-trained models?

Additionally, participants should also consider:
- How to automatically and efficiently select appropriate machine learning models and hyper-parameters?
- How to make the solution more generic, i.e., how to make it applicable for unseen tasks?
- How to keep the computational and memory cost acceptable? 
 

Visit "Get Started" and follow the steps to participate.
 

Timeline 

Beijing Time (UTC+8)

  • Sep 16th, 2019, 23:59: Beginning of the Feedback Phase, release of the practice datasets. Participants can start submitting code and obtaining immediate feedback on the leaderboard.
  • Oct 11th, 2019, 23:59: Deadline for registration and real personal identification.
  • Oct 16th, 2019, 23:59: End of the feedback Phase.
  • Oct 17th, 2019, 00:00: Beginning of the check Phase.
  • Oct 19th, 2019, 00:00: Check Phase submission deadline.
  • Oct 21st, 2019, 15:00: Check Phase result announcement.
  • Oct 21st, 2019, 15:00: Beginning of Final Phase.
  • Oct 23rd, 2019, 15:00: Re-submission deadline.
  • Oct 25th, 2019, 20:00: End of Final Phase.

Note that the CodaLab platform uses UTC time; please pay attention to the time descriptions elsewhere on this page so as not to mistake the time points for each phase of the competition.

 

Prizes

  • 1st  Place: $2000
  • 2nd Place: $1500
  • 3rd Place: $500
 

About

Please contact the organizers if you have any problem concerning this challenge.

Sponsors

4Paradigm · ChaLearn · Google

Advisors

- Wei-Wei Tu, 4Paradigm Inc., China, (Coordinator, Platform Administrator, Data Provider, Baseline Provider, Sponsor) tuweiwei@4paradigm.com

- Tom Ko, Southern University of Science and Technology, China, (Advisor) tomkocse@gmail.com

- Isabelle Guyon, Université Paris-Saclay, France, ChaLearn, USA, (Advisor, Platform Administrator) guyon@chalearn.org

- Qiang Yang, Hong Kong University of Science and Technology, Hong Kong, China, (Advisor, Sponsor) qyang@cse.ust.hk

Committee (alphabetical order)

- Jingsong Wang, 4Paradigm Inc., China, (Dataset provider, baseline) wangjingsong@4paradigm.com

- Ling Yue, 4Paradigm Inc., China, (Admin) yueling@4paradigm.com

- Shouxiang Liu, 4Paradigm Inc., China, (Admin) liushouxiang@4paradigm.com

- Xiawei Guo, 4Paradigm Inc., China, (Admin) guoxiawei@4paradigm.com

- Zhengying Liu, U. Paris-Saclay; U. PSud, France, (Platform Provider) zhengying.liu@inria.fr

- Zhen Xu, 4Paradigm Inc., China, (Admin) xuzhen@4paradigm.com

 

 

Organization Institutes

4Paradigm · ChaLearn · Google · SUSTech

About AutoML 

Previous AutoML Challenges:

- First AutoML Challenge

- AutoML@PAKDD2018

- AutoML@NeurIPS2018

- AutoML@PAKDD2019

- AutoML@KDDCUP2019

- AutoCV@IJCNN2019

- AutoCV2@ECML PKDD2019

- AutoNLP@WAIC2019

About 4Paradigm Inc.

Founded in early 2015, 4Paradigm is one of the world’s leading AI technology and service providers for industrial applications. 4Paradigm’s flagship product – the AI Prophet – is an AI development platform that enables enterprises to effortlessly build their own AI applications, and thereby significantly increase their operational efficiency. Using the AI Prophet, a company can develop a data-driven “AI Core System”, which could be largely regarded as a second core system next to the traditional transaction-oriented Core Banking System (IBM Mainframe) often found in banks. Beyond this, 4Paradigm has also successfully developed more than 100 AI solutions for use in various settings such as finance, telecommunication and internet applications. These solutions include, but are not limited to, smart pricing, real-time anti-fraud systems, precision marketing, personalized recommendation and more. And while it is clear that 4Paradigm can set up a completely new paradigm for how an organization uses its data, its scope of services does not stop there. 4Paradigm uses state-of-the-art machine learning technologies and practical experience to bring together a team of experts ranging from scientists to architects. This team has successfully built China’s largest machine learning system and the world’s first commercial deep learning system. However, 4Paradigm’s success does not stop there. With its core team pioneering the research of “Transfer Learning,” 4Paradigm takes the lead in this area and, as a result, has drawn great attention from worldwide tech giants.

About ChaLearn

ChaLearn is a non-profit organization with vast experience in the organization of academic challenges. ChaLearn is interested in all aspects of challenge organization, including data gathering procedures, evaluation protocols, novel challenge scenarios (e.g., competitions), training for challenge organizers, challenge analytics, result dissemination and, ultimately, advancing the state-of-the-art through challenges.

About Google

Google was founded in 1998 by Sergey Brin and Larry Page and is now a subsidiary of the holding company Alphabet Inc. More than 70 percent of worldwide online search requests are handled by Google, placing it at the heart of most Internet users’ experience. Its headquarters are in Mountain View, California. Google began as an online search firm, but it now offers more than 50 Internet services and products, from e-mail and online document creation to software for mobile phones and tablet computers. It is considered one of the Big Four technology companies, alongside Amazon, Apple and Facebook.

 

Quick start

The baseline, starting kit and practice datasets can be downloaded here:

 

Baseline

This is a challenge with code submission. We provide one baseline above for test purposes.

To make a test submission, download the starting kit and follow the instructions in its readme.md file. Click the blue button "Upload a Submission" in the upper right corner of the page and re-upload it. You must first click the orange tab "Feedback Phase" if you want to make a submission simultaneously on all datasets and get ranked in the challenge. You may also submit on a single dataset at a time (for debugging purposes). To check progress on your submissions, go to the "My Submissions" tab. Your best submission is shown on the leaderboard visible under the "Results" tab.

Starting kit

The starting kit contains everything you need to create your own code submission (just by modifying the file model.py) and to test it on your local computer, with the same handling programs and Docker image as those of the Codalab platform (but the hardware environment is in general different). 

The starting kit contains toy sample data. Besides that, five practice datasets are also provided so that you can develop your AutoSpeech solutions offline. These five practice datasets can be downloaded from the link at the beginning.

Note that the CUDA version in this Docker image is 10; if the CUDA version on your own machine is lower than 10, you may be unable to use the GPU inside this Docker container.

Local development and testing:

You can test your code in the exact same environment as the Codalab environment using docker. You are able to run the ingestion program (to produce predictions) and the scoring program (to evaluate your predictions) on toy sample data.

1. If you are new to docker, install docker from https://docs.docker.com/get-started/.

2. At the shell, change to the starting-kit directory and run:

  (CPU) docker run -it -v "$(pwd):/app/codalab" nehzux/autospeech:gpu

  (GPU) docker run --gpus '"device=0"' -it -v "$(pwd):/app/codalab" nehzux/autospeech:gpu

(Note that for running docker with GPU, you need to install Nvidia-docker first.)

3. Now you are in the bash shell of the Docker container; run the local test program:

  python run_local_test.py -dataset_dir=path_to_dataset -code_dir=path_to_model_file

It runs the ingestion and scoring programs simultaneously, and the predictions and scoring results are written to the sample_result_submissions and scoring_output directories.

Submission

Interface

The interface is simple and generic: you must supply a Python model.py, where a Model class is defined with:

  • a constructor
  • a train method
  • a test method
  • a done_training attribute

The Python version on our platform is 3.6.8. Below we define the interface of the Model class in detail; a minimal sketch follows the definitions.

__init__(self, meta_data):

  • meta_data: a Python dictionary containing the meta information about the dataset, described on the "Get Started - Dataset" page.

train(self, training_data, remaining_time_budget):

  • training_data: a two-tuple (a, b)
    • a: a list of vectors, input speech raw data.
    • b: a list of lists of int, the labels of the training data in one-hot format.
  • remaining_time_budget: a scalar that indicates the remaining time (sec) for the submission process.

test(self, test_data, remaining_time_budget):  

  • test_data: a list of vectors, input speech raw data.
  • remaining_time_budget: a scalar that indicates the remaining time (sec) for the submission process.
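
For concreteness, here is a minimal sketch of a model.py satisfying this interface. Only the interface itself comes from the description above; the feature extraction and classifier are placeholders, not the provided baseline.

  import numpy as np

  class Model:
      def __init__(self, meta_data):
          # meta_data: the dictionary described above (class_num, train_num, ...).
          self.meta_data = meta_data
          self.done_training = False  # ingestion stops calling train/test once this is True

      def train(self, training_data, remaining_time_budget=None):
          audio_list, labels = training_data  # list of 1-D vectors, one-hot label lists
          # ... extract features and fit/update a classifier here (placeholder) ...
          if remaining_time_budget is not None and remaining_time_budget < 60:
              self.done_training = True

      def test(self, test_data, remaining_time_budget=None):
          # Must return one prediction row per test instance (class_num columns).
          return np.zeros((len(test_data), self.meta_data["class_num"]))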

To make submissions, zip model.py and its dependency files (without the directory), then use the "Upload a Submission" button. Please note that you must first click the orange tab "On-line Phase" if you want to make a submission simultaneously on all datasets and get ranked in the challenge. You may also submit on a single dataset at a time (for debugging purposes). Besides that, the ranking on the public leaderboard is determined by the LAST code submission of each participant.

Computational limitations

  • A submission on one dataset is limited to a maximum of 30 minutes (with an extra 20 minutes for initialization).
  • We currently limit participants to 500 minutes of computing time per day (subject to change, depending on the number of participants).
  • Participants are limited to 2 submissions per day per dataset. However, we do not encourage making a submission on a single dataset at a time; the submission on all datasets will fail if the number of submissions for any one dataset is exhausted.

Running Environment

In the starting kit, we provide a Docker image that simulates the running environment of our challenge platform. Participants can check the Python version and installed Python packages with the following commands:

 python --version

 pip list

On our platform, for each submission, the allocated computational resources are:

  • CPU: 4 Cores
  • GPU: an NVIDIA Tesla P100 GPU (running CUDA 10 with cuDNN 7.5)
  • Memory: 26 GB
  • Disk: 100 GB

Challenge Rules

  • General Terms: This challenge is governed by the General ChaLearn Contest Rule Terms, the Codalab Terms and Conditions, and the specific rules set forth.
  • Announcements: To receive announcements and be informed of any change in rules, the participants must provide a valid email.
  • Conditions of participation: Participation requires complying with the rules of the challenge. Prize eligibility is restricted by Chinese government export regulations, see the General ChaLearn Contest Rule Terms. The organizers, sponsors, their students, close family members (parents, sibling, spouse or children) and household members, as well as any person having had access to the truth values or to any information about the data or the challenge design giving him (or her) an unfair advantage, are excluded from participation. A disqualified person may submit one or several entries in the challenge and request to have them evaluated, provided that they notify the organizers of their conflict of interest. If a disqualified person submits an entry, this entry will not be part of the final ranking and does not qualify for prizes. The participants should be aware that ChaLearn and the organizers reserve the right to evaluate for scientific purposes any entry made in the challenge, whether or not it qualifies for prizes.
  • Dissemination: The challenge is organized in conjunction with the ACML 2019 conference. Top ranking participants will be invited to submit a paper to a special issue on Automated Machine Learning of IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI).
  • Registration: The participants must register on Codalab and provide a valid email address. Teams must register only once and provide a group email, which is forwarded to all team members. Teams or solo participants registering multiple times to gain an advantage in the competition may be disqualified. One participant can only be registered in one team. Note that you can join the challenge until one week before the end of the feedback phase. Real personal identification will be required (notified by the organizers) at the end of the feedback phase to avoid duplicate accounts and for prize claims.
  • Anonymity: The participants who do not present their results at the challenge session can elect to remain anonymous by using a pseudonym. Their results will be published on the leaderboard under that pseudonym, and their real name will remain confidential. However, the participants must disclose their real identity to the organizers to join check phase and final phase and to claim any prize they might win. See our privacy policy for details.
  • Submission method: The results must be submitted through this CodaLab competition site. The number of submissions per day is 2 and the maximum total computational time per day is 500 minutes. Using multiple accounts to increase the number of submissions is NOT permitted. In case of problems, send email to autospeech2019@4paradigm.com. The entries must be formatted as specified on the Instructions page.
  • Prizes: The top 10 ranking participants in the Final Phase may qualify for prizes. To compete for prizes, the participants must make a valid submission on the Final Phase website (TBA), and fill out a fact sheet briefly describing their methods before the announcement of the final winners. There is no other publication requirement. The winners will be required to make their code publicly available under an OSI-approved license such as, for instance, Apache 2.0, MIT or BSD-like license, if they accept their prize, within a week of the deadline for submitting the final results. Entries exceeding the time budget will not qualify for prizes. In case of a tie, the prize will go to the participant who submitted his/her entry first. Non winners or entrants who decline their prize retain all their rights on their entries and are not obliged to publicly release their code.
  • Cheating: We forbid people during the development phase to attempt to get a hold of the solution labels on the server (though this may be technically feasible). For the final phase, the evaluation method will make it impossible to cheat in this way. Generally, participants caught cheating will be disqualified.

Dataset

This page describes the datasets used in the AutoSpeech challenge. 15 speech classification datasets are prepared for this competition. Five practice datasets, which can be downloaded, are provided to the participants so that they can develop their AutoSpeech solutions offline. Besides that, another five validation datasets are also provided to participants to evaluate the public leaderboard scores of their AutoSpeech solutions. Afterward, their solutions will be evaluated on five test datasets without human intervention.
 

Description

Each provided dataset is from one of five different speech classification domains: Speaker Identification, Emotion Classification, Accent Recognition, Language Identification and Music Genre Classification. In the datasets, the number of classes is greater than 2 and less than 100, while the number of instances varies from hundreds to thousands. All the audios are first converted to single-channel, 16-bit streams at a 16 kHz sampling rate for consistency; they are then loaded by librosa and dumped to pickle format (a list of vectors containing all train or test audios in one dataset). Note that the datasets contain both long and short audios, without padding.
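
As an illustration, a dataset in this format could be produced roughly as follows. This is a hedged sketch, not the organizers' exact preprocessing script; the function name and file paths are hypothetical.

  import pickle
  import librosa

  def build_pickle(audio_paths, out_path):
      waveforms = []
      for path in audio_paths:
          # librosa.load resamples to 16 kHz and converts to a single channel (mono).
          y, _ = librosa.load(path, sr=16000, mono=True)
          waveforms.append(y)  # 1-D float32 vector, variable length, no padding
      with open(out_path, "wb") as f:
          pickle.dump(waveforms, f)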
 
 

Components

All the datasets consist of an audio file, a label file and a meta file, where the audio and label files are split into train and test parts (a loading sketch follows the component descriptions):
  • Audio file ({train,test}.pkl) contains the samples of the audios, formatted as a list of vectors.

    Example:

    [
    [-1.2207031e-04, 3.0517578e-05, -1.5258789e-04, ..., -8.8500977e-04, -8.5449219e-04, -1.3732910e-03],
    [ 9.1552734e-05, 7.0190430e-04, 1.0375977e-03, ..., -7.6293945e-04, 2.7465820e-04, 1.0375977e-03],
    [ 1.8920898e-03, 1.6784668e-03, 1.4648438e-03, ..., 3.0517578e-05, -2.7465820e-04, -3.0517578e-04],
    [0.02307129, 0.02386475, 0.02462769, ..., 0.02420044, 0.02410889, 0.02429199],
    [ 6.1035156e-05, 1.2207031e-04, 4.5776367e-04, ..., -1.2207031e-04, -6.1035156e-04, -3.6621094e-04],
    [0.03787231, 0.03686523, 0.03723145, ..., 0.03497314, 0.03594971, 0.0350647 ],
    ...,
    ]
  • Label file ({train, dataset_name}.solution) consists of the labels of the instances in one-hot format. Note that each of its lines corresponds to the instance on the same line of the audio file.
    Example:

    1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    

 

  • Meta file (meta.json) is a JSON file containing the meta information about the dataset. Descriptions of the keys in the meta file:

    class_num : number of classes in the dataset
    train_num : the number of training instances
    test_num : the number of test instances
    time_budget : the time budget of the dataset, 1800s for all the datasets
  • Example:

    {
      "class_num": 10,
      "train_num": 428,
      "test_num": 107,
      "time_budget": 1800
    }
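
A minimal loading sketch, assuming the file names above and a local copy of a practice dataset (the directory path is hypothetical):

  import json
  import pickle
  import numpy as np

  dataset_dir = "practice_dataset_01"  # hypothetical local path

  with open(dataset_dir + "/meta.json") as f:
      meta = json.load(f)  # e.g. {"class_num": 10, "train_num": 428, ...}

  with open(dataset_dir + "/train.pkl", "rb") as f:
      train_audio = pickle.load(f)  # list of 1-D vectors, variable length

  train_labels = np.loadtxt(dataset_dir + "/train.solution")  # one-hot, shape (train_num, class_num)

  print(meta["class_num"], len(train_audio), train_labels.shape)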
 

Credits

We thank the following sources for providing us with these wonderful datasets:

- A. Nagrani*, J. S. Chung*, A. Zisserman. VoxCeleb: a large-scale speaker identification dataset. INTERSPEECH, 2017.

- Weinberger, Steven (2015). Speech Accent Archive. George Mason University. Retrieved from http://accent.gmu.edu

- Berlin Emotional Speech Database, http://www.expressive-speech.net/

- CSS10: A Collection of Single Speaker Speech Datasets for 10 Languages. https://arxiv.org/abs/1903.11269

- D. Ellis (2007). Classifying Music Audio with Timbral and Chroma Features. Proc. Int. Conf. on Music Information Retrieval (ISMIR-07), Vienna, Austria, Sep. 2007.

 

 

Evaluation

Competition protocol

This challenge has three phases. The participants are provided with five practice datasets, which can be downloaded, so that they can develop their AutoSpeech solutions offline. Then, the code is uploaded to the platform and participants receive immediate feedback on the performance of their method on another five validation datasets. After the feedback phase terminates, there is a check phase, in which participants are allowed to submit their code only once on the private datasets in order to debug. Participants cannot read detailed logs, but they can see whether their code reports errors. Last, in the Final Phase, participants' solutions will be evaluated on five test datasets. The ranking in the final phase will count towards determining the winners.

Code submitted is trained and tested automatically, without any human intervention. Code submitted on feedback (resp. final) phase is run on all five feedback (resp. final) datasets in parallel on separate compute workers, each one with its own time budget. 

The identities of the datasets used for testing on the platform are concealed. The data are provided in a raw form (no feature extraction) to encourage researchers to use Deep Learning methods performing automatic feature learning, although this is NOT a requirement. All problems are multi-class classification problems. The tasks are constrained by the time budget (30 minutes/dataset).

Here is some pseudo-code of the evaluation protocol:

# For each dataset, our evaluation program calls the model constructor.
# The total time of importing Model and initializing Model should not exceed 20 minutes.
from model import Model

M = Model(metadata=dataset_metadata)
remaining_time_budget = overall_time_budget

# The ingestion program then calls train and test multiple times.
# Only the runtime of the train and test calls is counted against the time budget.
repeat until M.done_training or remaining_time_budget < 0:
    start_time = time.time()
    M.train(training_data, remaining_time_budget)
    remaining_time_budget -= time.time() - start_time

    start_time = time.time()
    results = M.test(test_data, remaining_time_budget)
    remaining_time_budget -= time.time() - start_time

    # Results are made available to the scoring program (run in a separate container)
    save(results)

It is the responsibility of the participants to make sure that neither the "train" nor the "test" method exceeds the "remaining_time_budget". The "train" method can choose to manage its time budget so that it trains in varying time increments. Note that the model is initialized only once during the submission process, so the participants can control the model's behavior at each train step through its member variables. There is pressure not to use the entire "overall_time_budget" in the first iteration, because we use the area under the learning curve as the metric. Besides that, the total time of importing Model and initializing Model should not exceed 20 minutes.
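
One possible way to pace training under these constraints is sketched below. This is a hedged example: the 10% time slice, the _run_one_epoch helper and the epochs_done counter are illustrative choices, not part of the required interface.

  import time

  class PacedTrainingMixin:
      def train(self, training_data, remaining_time_budget=None):
          budget = remaining_time_budget if remaining_time_budget is not None else 1800
          start = time.time()
          # Spend only a fraction of the remaining budget per call, so that test()
          # is reached early and often; early predictions help the ALC metric.
          while time.time() - start < 0.1 * budget:
              self._run_one_epoch(training_data)  # hypothetical helper
              self.epochs_done += 1               # counter kept across calls (set in __init__)
          # Signal the ingestion program to stop once almost no time is left.
          self.done_training = budget - (time.time() - start) < 60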

Metrics

The participants can train in batches of pre-defined duration to incrementally improve their performance until the time limit is attained. In this way, we can plot learning curves: "performance" as a function of time. Each time the "train" method terminates, the "test" method is called and the results are saved, so the scoring program can use them, together with their timestamp.

For multi-class problems, each label/class is considered a separate binary classification problem, and we compute the normalized AUC (or Gini coefficient)

    2 * AUC - 1

as the score for each prediction, where AUC is the average of the usual area under the ROC curve (ROC AUC) over all the classes in the dataset.
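
A sketch of this score, assuming one-hot ground truth and per-class prediction scores (an illustration, not the official scoring code):

  import numpy as np
  from sklearn.metrics import roc_auc_score

  def normalized_auc(y_true_one_hot, y_pred_scores):
      # ROC AUC computed per class and averaged over classes (macro average).
      auc = roc_auc_score(y_true_one_hot, y_pred_scores, average="macro")
      return 2 * auc - 1  # Gini-style normalization, maps [0, 1] to [-1, 1]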

For each dataset, we compute the Area under Learning Curve (ALC). The learning curve is drawn as follows:

  • at each timestamp t, we compute s(t), the normalized AUC (see above) of the most recent prediction. In this way, s(t) is a step function w.r.t. time t;
  • in order to normalize time to the [0, 1] interval, we perform a time transformation by

        t~(t) = log(1 + t / t0) / log(1 + T / t0),

    where T is the time budget (of default value 1800 seconds = 30 minutes) and t0 is a reference time amount (of default value 60 seconds);
  • then compute the area under the learning curve using the formula

        ALC = integral of s(t) dt~(t) = integral of s(t) / ((t + t0) log(1 + T / t0)) dt,  for t from 0 to T;

    we see that s(t) is weighted by 1/(t + t0), giving a stronger importance to predictions made at the beginning of the learning curve.

After we compute the ALC for all 5 datasets, the overall ranking is used as the final score for evaluation and will be used in the leaderboard. It is computed by averaging the ranks (among all participants) of ALC obtained on the 5 datasets.
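
Putting the pieces together, here is a sketch of the ALC computation for one dataset, under the time transformation above (an illustration, not the official scoring program):

  import numpy as np

  def area_under_learning_curve(timestamps, scores, T=1800.0, t0=60.0):
      # timestamps: seconds at which predictions were saved (increasing);
      # scores: normalized AUC of the prediction saved at each timestamp.
      def transform(t):
          return np.log(1 + t / t0) / np.log(1 + T / t0)

      alc = 0.0
      for i, (t, s) in enumerate(zip(timestamps, scores)):
          # s(t) is a step function: score s holds from time t until the next prediction (or T).
          t_next = timestamps[i + 1] if i + 1 < len(timestamps) else T
          alc += s * (transform(min(t_next, T)) - transform(min(t, T)))
      return alc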

Examples of learning curves:

 

 

FAQs

Can organizers compete in the challenge?

No, they can make entries that show on the leaderboard for test purposes and to stimulate participation, but they are excluded from winning prizes.

Are there prerequisites to enter the challenge?

No, except accepting the TERMS AND CONDITIONS.

Can I enter any time?

No, you can join the challenge until one week before the end of feedback phase. After that, we will require real personal identification (notified by organizers) to avoid duplicate accounts.

Where can I download the data?

You can download the "practice datasets" only from the Instructions page. The data on which your code is evaluated cannot be downloaded; it will be visible to your code only, on the Codalab platform.

How do I make submissions?

To make a valid challenge entry, click the blue button on the upper right side "Upload a Submission". This will ensure that you submit on all 5 datasets of the challenge simultaneously. You may also make a submission on a single dataset for debug purposes, but it will not count towards the final ranking.

Do you provide tips on how to get started?

We provide a Starting Kit in Python with step-by-step instructions in "README.md".

Are there publication opportunities?

Yes. Top ranking participants will be invited to submit papers to a special issue of IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) on Automated Machine Learning and will be entered in a contest for the best paper. Deadline: March 15th, 2020.

There will be 2 best paper awards of $1000 ("best paper" and "best student paper").

Are there prizes?

Yes, a $4000 prize pool.

  • 1st place: $2000
  • 2nd place: $1500
  • 3rd place: $500

Do I need to submit code to participate?

Yes, participation is by code submission.

When I submit code, do I surrender all rights to that code to the SPONSORS or ORGANIZERS?

No. You just grant to the ORGANIZERS a license to use your code for evaluation purposes during the challenge. You retain all other rights.

If I win, I must submit a fact sheet, do you have a template?

Yes, we will provide the fact sheet template in due time.

What is your CPU/GPU computational configuration?

We are running your submissions on Google Cloud NVIDIA Tesla P100 GPUs. In non-peak times we plan to use 10 workers, each of which will have one NVIDIA Tesla P100 GPU (running CUDA 10 with cuDNN 7.5) and 4 vCPUs, with 26 GB of memory and 100 GB of disk.

The PARTICIPANTS will be informed if the computational resources increase. They will NOT decrease.

Can I pre-train a model on my local machine and submit it?

This is not explicitly forbidden, but it is discouraged. We prefer if all calculations are performed on the server. If you submit a pre-trained model, you will have to disclose it in the fact sheets. 

Will there be a final test round on separate datasets?

YES. The ranking of participants will be made from a final blind test made by evaluating a SINGLE SUBMISSION made on the final test submission site. The submission will be evaluated on five new test datasets in a completely "blind testing" manner. The final test ranking will determine the winners.

What is my time budget?

20 min is granted for initialization. Each execution must run in less than 30 minutes (1800 seconds) for each dataset. Your cumulative time is limited to 500 minutes per day in total.

Does the time budget correspond to wall time or CPU/GPU time?

Wall time.

My submission seems stuck, how long will it run?

In principle, no more than its time budget. We kill the process if the time budget is exceeded. Submissions are queued and run on a first come, first served basis. We are using several identical servers. Contact us if your submission is stuck for more than 24 hours. Check the execution time on the leaderboard.

How many submissions can I make?

Two per day, but up to a total computational time of 500 minutes (submissions taking longer will be aborted). This may be subject to change, according to the number of participants. Please respect other users. It is forbidden to register under multiple user IDs to gain an advantage and make more submissions. Violators will be DISQUALIFIED FROM THE CONTEST.

Do my failed submissions count towards my number of submissions per day?

No. Please contact us if you think the failure is due to the platform rather than to your code and we will try to resolve the problem promptly.

What happens if I exceed my time budget?

This should be avoided. If a submission exceeds the 30-minute time budget for a particular task (dataset), the submission handling process (the ingestion program in particular) will be killed when the time budget is used up, and the predictions made so far (with their corresponding timestamps) will be used for evaluation. If a submission exceeds the total compute time per day, all running tasks will be killed by CodaLab, the status will be marked 'Failed' and a score of -1.0 will be produced.

The time budget is too small, can you increase it?

No, sorry, not for this challenge.

What metric are you using?

All problems are multi-class problems and we treat them as multiple 2-class classification problems. For a given dataset, all binary classification problems are scored with the ROC AUC and the results are averaged (over all classes/binary problems). For each time step at which you save results, this gives you one point on the learning curve. The final score for one dataset is the area under the learning curve. The overall score on all 5 datasets is the average rank on the 5 datasets. For more details, go to 'Get Started' -> 'Evaluation' -> 'Metrics'.

Which version of Python are you using?

The code was tested under Python 3.6.8. We are running Python 3.6.8 on the server and the same libraries are available.

Can I use something else than Python code?

Yes. Any Linux executable can run on the system, provided that it fulfills our Python interface and you bundle all necessary libraries with your submission.

Do I have to use TensorFlow?

No. 

Which docker are you running on Codalab?

nehzux/autospeech:gpu; see the instructions on Docker Hub.

How do I test my code in the same environment that you are using before submitting?

When you submit code to Codalab, your code is executed inside a Docker container. This environment can be exactly reproduced on your local machine by downloading the corresponding docker image. The docker environment of the challenge contains Anaconda libraries, TensorFlow, and PyTorch (among other things).  

What is meant by "Leaderboard modifying disallowed"?

Your last submission is shown automatically on the leaderboard. You cannot choose which submission to select. If you want another submission than the last one you submitted to "count" and be displayed on the leaderboard, you need to re-submit it.

Can I register multiple times?

No. If you accidentally register multiple times or have multiple accounts from members of the same team, please notify the ORGANIZERS. Teams or solo PARTICIPANTS with multiple accounts will be disqualified.

How can I create a team?

We have disabled Codalab team registration. To join as a team, just share one account with your team. The team leader is responsible for making submissions and observing the rules.

How can I destroy a team?

You cannot. If you need to destroy your team, contact us.

Can I join or leave a team?

It is up to you and the team leader to make arrangements. However, you cannot participate in multiple teams.

Can I cheat by trying to get hold of the evaluation data while my code is running?

No. If we discover that you are trying to cheat in this way you will be disqualified. All your actions are logged and your code will be examined if you win.

Can I give an arbitrary hard time to the ORGANIZERS?

ALL INFORMATION, SOFTWARE, DOCUMENTATION, AND DATA ARE PROVIDED "AS-IS". UPSUD, CHALEARN, IDF, AND/OR OTHER ORGANIZERS AND SPONSORS DISCLAIM ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR ANY PARTICULAR PURPOSE, AND THE WARRANTY OF NON-INFRINGEMENT OF ANY THIRD PARTY'S INTELLECTUAL PROPERTY RIGHTS. IN NO EVENT SHALL ISABELLE GUYON AND/OR OTHER ORGANIZERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF SOFTWARE, DOCUMENTS, MATERIALS, PUBLICATIONS, OR INFORMATION MADE AVAILABLE FOR THE CHALLENGE. In case of dispute or possible exclusion/disqualification from the competition, the PARTICIPANTS agree not to take immediate legal action against the ORGANIZERS or SPONSORS. Decisions can be appealed by submitting a letter to the CHALEARN president, and disputes will be resolved by the CHALEARN board of directors. See contact information.

Where can I get additional help?

For questions of general interest, THE PARTICIPANTS should post their questions to the forum.

Other questions should be directed to the organizers.

Check Phase

Start: Oct. 13, 2019, 4 p.m.

Description: Please make submissions by clicking on the 'Submit' button. Then you can view the submission results of your algorithm on each dataset in the corresponding tab (Dataset 1, Dataset 2, etc.).

Datasets:

Label       Description                                                              Start
Dataset 1   This tab contains submission results of your algorithm on Dataset 1.    Oct. 13, 2019, 4 p.m.
Dataset 2   This tab contains submission results of your algorithm on Dataset 2.    Oct. 16, 2019, 4 p.m.
Dataset 3   This tab contains submission results of your algorithm on Dataset 3.    Oct. 16, 2019, 4 p.m.
Dataset 4   This tab contains submission results of your algorithm on Dataset 4.    Oct. 16, 2019, 4 p.m.
Dataset 5   This tab contains submission results of your algorithm on Dataset 5.    Oct. 16, 2019, 4 p.m.

Competition Ends

Oct. 23, 2019, 7 a.m.
