Note that the CodaLab platform uses UTC time. Please pay attention to the time descriptions elsewhere on this page so as not to misread the start and end times of each phase of the competition.
Please contact the organizers if you have any problem concerning this challenge.
- Wei-Wei Tu, 4Paradigm Inc., China, (Coordinator, Platform Administrator, Data Provider, Baseline Provider, Sponsor) tuweiwei@4paradigm.com
- Isabelle Guyon, Université Paris-Saclay, France, ChaLearn, USA, (Advisor, Platform Administrator) guyon@chalearn.org
- Qiang Yang, Hong Kong University of Science and Technology, Hong Kong, China, (Advisor, Sponsor) qyang@cse.ust.hk
- Chenshuo Liu, 4Paradigm Inc., China, (Admin) liuchenshuo@4paradigm.com
- Ling Yue, 4Paradigm Inc., China, (Admin) yueling@4paradigm.com
- Shouxiang Liu, 4Paradigm Inc., China, (Admin) liushouxiang@4paradigm.com
- Xiawei Guo, 4Paradigm Inc., China, (Admin) guoxiawei@4paradigm.com
- Zhen Xu, 4Paradigm Inc., China, (Admin) xuzhen@4paradigm.com
Founded in early 2015, 4Paradigm is one of the world’s leading AI technology and service providers for industrial applications. 4Paradigm’s flagship product, the AI Sage EE, is an AI development platform that enables enterprises to build their own AI applications with ease and thereby significantly increase their operational efficiency. Using the AI Sage EE, a company can develop a data-driven “AI Core System”, which can largely be regarded as a second core system alongside the traditional transaction-oriented Core Banking System (IBM Mainframe) often found in banks. Beyond this, 4Paradigm has successfully developed more than 100 AI solutions for settings such as finance, telecommunications and internet applications, including, but not limited to, smart pricing, real-time anti-fraud systems, precision marketing and personalized recommendation. While 4Paradigm can thoroughly reshape the way an organization uses its data, its scope of services does not stop there. 4Paradigm draws on state-of-the-art machine learning technologies and practical experience to bring together a team of experts ranging from scientists to architects. This team has built China’s largest machine learning system and the world’s first commercial deep learning system. With its core team pioneering research on transfer learning, 4Paradigm also takes the lead in this area and has drawn great attention from tech giants worldwide.
ChaLearn is a non-profit organization with vast experience in the organization of academic challenges. ChaLearn is interested in all aspects of challenge organization, including data gathering procedures, evaluation protocols, novel challenge scenarios (e.g., competitions), training for challenge organizers, challenge analytics, result dissemination and, ultimately, advancing the state-of-the-art through challenges.
This is a challenge with code submission. We provide one baseline above for test purposes.
To make a submission, download the starting kit and follow the instructions in its README.md file, then click the blue "Upload a Submission" button in the upper right corner of the page and upload your zipped submission. You must first click the orange "Feedback Phase" tab if you want to submit on all datasets simultaneously and get ranked in the challenge. You may also submit on a single dataset at a time (for debugging purposes). To check the progress of your submissions, go to the "My Submissions" tab. Your best submission is shown on the leaderboard visible under the "Results" tab.
The starting kit contains everything you need to create your own code submission (just by modifying the file model.py) and to test it on your local computer, with the same handling programs and Docker image as those of the Codalab platform (but the hardware environment is in general different).
The starting kit contains toy sample data. Besides that, 2 public datasets are also provided so that you can develop your solutions offline. These 2 public datasets can be downloaded from the link at the beginning.
You can test your code in the exact same environment as the Codalab environment using docker. You are able to run the ingestion program (to produce predictions) and the scoring program (to evaluate your predictions) on toy sample data.
1. If you are new to docker, install docker from https://docs.docker.com/get-started/.
2. At the shell, change to the starting kit directory, run
docker run -it -v "$(pwd):/app/codalab" vergilgxw/autotable:v3
3. Now you are in the bash shell of the Docker container; run the local test program:
python run_local_test.py --dataset_dir=[path_to_dataset] --code_dir=[path_to_model_file]
It runs the ingestion and scoring programs simultaneously; the predictions and scoring results are written to the sample_result_submissions and scoring_output directories.
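For example, assuming the toy sample data and the baseline code live in sample_data and sample_code_submission inside the starting kit (these directory names are illustrative; check README.md in your copy for the actual ones):

python run_local_test.py --dataset_dir=./sample_data --code_dir=./sample_code_submission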
The interface is simple and generic: you must supply a Python file model.py, whose API is described on the "Evaluation" page.
To make a submission, zip model.py and its dependency files (without their parent directory), then use the "Upload a Submission" button. Please note that you must first click the orange "Feedback Phase" / "Private Phase" tab if you want to submit on all datasets simultaneously and get ranked in the challenge. You may also submit on a single dataset at a time (for debugging purposes). Note also that the ranking on the public leaderboard is determined by the LAST code submission of each participant.
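For example, if your submission consists of model.py plus a helper file utils.py (a hypothetical name), the following zips the files themselves rather than their parent folder:

zip mysubmission.zip model.py utils.py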
In the starting kit, we provide a Docker image that simulates the running environment of our challenge platform. Participants can check the Python version and the installed Python packages with the following commands:
python --version
pip list
On our platform, for each submission, the allocated computational resources are:
This is the training data, including the target variable (regression target). Its column types can be read from info.yaml.
There are 3 data types of features, indicated by "num", "str" and "timestamp", respectively (a short handling sketch follows the list):
• num: a numerical feature, i.e. a real value
• str: a string or categorical feature
• timestamp: a time feature, an integer UNIX timestamp
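For illustration, a minimal sketch of how these three feature types might be handled with pandas (the column names, file path and separator below are hypothetical; the real column types come from the schema in info.yaml):

import pandas as pd

df = pd.read_csv("train.data", sep="\t")            # hypothetical path and separator
df["price"] = df["price"].astype(float)             # "num": a real value
df["shop_id"] = df["shop_id"].astype("category")    # "str": string/categorical feature
df["t"] = pd.to_datetime(df["t"], unit="s")         # "timestamp": integer UNIX timestamp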
This is the test data, including the target variable (regression target). Its column types can be read from info.yaml.
This is the test solution (extracted from test.data).
These are the unique test timestamps (extracted from test.data).
For every dataset, we provide an info.yaml file that contains the important information (meta data).
Here we give details about the fields of info.yaml:
• time_budget: the time budgets for the different methods in user models
• schema : stores data type information of each column
• is_multivariate: whether there are multiple time series.
• is_relative_time: DEPRECATED, not used in this challenge.
• primary_timestamp: UNIX timestamp
• primary_id: a list of column names, identifying each time series uniquely. Note that if is_multivariate is False, this will be an empty list.
• label: regression target
Example:
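(The field values below are purely illustrative assumptions; the actual info.yaml of each dataset may differ. See the public datasets for real examples.)

time_budget:
  train: 1800
  predict: 600
  update: 600
  save: 120
  load: 120
schema:
  t: timestamp
  shop_id: str
  price: num
  label: num
is_multivariate: true
is_relative_time: false
primary_timestamp: t
primary_id:
  - shop_id
label: label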
This challenge has three phases. Participants are provided with public datasets that can be downloaded, so that they can develop their AutoSeries solutions offline. The code is then uploaded to the platform and participants receive immediate feedback on the performance of their method on 5 feedback datasets. After the Feedback Phase terminates, there is a Check Phase, in which participants are allowed to submit their code only once on the private datasets in order to debug. Participants will not be able to read detailed logs, but they will be able to see whether their code reports errors. Finally, in the Private Phase, participants’ solutions are evaluated on 5 private datasets. The ranking in the Private Phase will count towards determining the winners.
Submitted code is trained and tested automatically, without any human intervention. Code submitted in the Feedback (or Private) Phase runs on all 5 feedback (or private) datasets in parallel, on separate compute workers, each with its own time budget.
The flow diagram of the running process is shown in Figure 1.
Figure 1. The flow diagram of ingestion program.
The procedure in Figure 1 can be described as follows:
Figure 2 illustrates the predict method. Here \(X^{t}, Y^{t}\) are the samples and true labels in test dataset with timestamp t. \(\tilde Y^t\) is the predicted labels. \(\mathbb{I}_{update}^t\) indicates whether the program needs to update.
Figure 3 shows the update method. In this sub-procedure, the user program can update the model with training data and all historical data in test dataset.
Figure 2. The predict method. X, Y are samples and labels.
Figure 3. The update method.
(For more details, please check the ingestion program in the starting kit.)
Participants should implement a Model class in their model.py file. The interface and its description can be found in Figure 4. The Model class should define 6 methods: __init__, train, predict, update, save and load.
Important remark about competition rules:
Figure 4. The interface of user program. There are 6 methods that should be defined: __init__, train, predict, update, save and load.
train, predict, update, save and load all run within limited time budgets. The time budgets for each dataset can be found in its info.yaml. For train, save and load, each call of the method has its own time budget. For predict and update, all calls of a method share its time budget, i.e. the total running time of all calls of predict cannot exceed the time budget of predict, and similarly for update.
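For orientation only, here is a minimal sketch of such a model.py. The method signatures and the trivial predict-the-mean logic are illustrative assumptions on our part; the authoritative interface is the one in Figure 4 and in the starting kit (including how the update indicator of Figure 2 is communicated back to the ingestion program):

import os
import pickle
import pandas as pd

class Model:
    def __init__(self, info):
        # `info` is assumed to carry the metadata from info.yaml (schema, label name, budgets, ...)
        self.info = info
        self.mean_label = 0.0

    def train(self, train_df):
        # Fit an initial model on the training data; here we only remember the label mean.
        self.mean_label = float(train_df[self.info["label"]].mean())
        return self

    def predict(self, pred_df):
        # Return one prediction per row of the records to predict at the current timestamp.
        return [self.mean_label] * len(pred_df)

    def update(self, train_df, test_history_df):
        # Re-fit using the training data plus the labelled test history revealed so far.
        full = pd.concat([train_df, test_history_df], ignore_index=True)
        self.mean_label = float(full[self.info["label"]].mean())
        return self

    def save(self, model_dir):
        # Persist the model so that load() can restore it, possibly in a later process.
        with open(os.path.join(model_dir, "model.pkl"), "wb") as f:
            pickle.dump(self.mean_label, f)

    def load(self, model_dir):
        with open(os.path.join(model_dir, "model.pkl"), "rb") as f:
            self.mean_label = pickle.load(f)
        return self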
For each dataset, we compute the Root-Mean-Square Error (RMSE) as the evaluation metric for this competition. Participants are ranked according to RMSE on each dataset. After the RMSE has been computed for all 5 datasets, the overall ranking is used as the final score shown on the leaderboard. It is computed by averaging the ranks (among all participants) of the RMSE obtained on the 5 datasets.
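For reference, a small sketch of the metric and of the average-rank aggregation (the scoring program in the starting kit is authoritative; ties are ignored here for simplicity):

import numpy as np

def rmse(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def average_rank(rmse_per_dataset, participant):
    # rmse_per_dataset: list of dicts, one per dataset, mapping participant name -> RMSE
    ranks = []
    for scores in rmse_per_dataset:
        ordered = sorted(scores, key=scores.get)      # lower RMSE = better (rank 1)
        ranks.append(ordered.index(participant) + 1)
    return sum(ranks) / len(ranks)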
No, they can make entries that show on the leaderboard for test purposes and to stimulate participation, but they are excluded from winning prizes.
No, except accepting the TERMS AND CONDITIONS.
No, you can join the challenge until one week before the end of the Feedback Phase. After that, we will require real personal identification (participants will be notified by the organizers) to avoid duplicate accounts.
You can download the "public datasets" only, from the Instructions page. The data on which your code is evaluated cannot be downloaded; it will be visible only to your code, on the CodaLab platform.
To make a valid challenge entry, click the blue button on the upper right side "Upload a Submission". This will ensure that you submit on all datasets of the challenge simultaneously. You may also make a submission on a single dataset for debug purposes, but it will not count towards the final ranking.
We provide a starting kit in Python with step-by-step instructions in "README.md".
Yes, a $4000 prize pool.
| | 1st place | 2nd place | 3rd place |
|---|---|---|---|
| Prize | $2000 | $1500 | $500 |
Yes, participation is by code submission.
No. You just grant to the ORGANIZERS a license to use your code for evaluation purposes. You retain all other rights.
Yes, we will provide the fact sheet in due time.
On our platform, for each submission, the allocated computational resources are:
The PARTICIPANTS will be informed if the computational resources increase. They will NOT decrease.
YES. The ranking of participants will be determined by a private blind test, evaluating a SINGLE SUBMISSION made on the private submission site. The submission will be evaluated on 5 new private datasets in a completely blind manner. The private test ranking will determine the winners.
Each execution must run in its own time budget for each dataset (provided in the metafile info.yaml).
Wall time.
In principle, no more than its time budget. We kill the process if the time budget is exceeded. Submissions are queued and run on a first-come, first-served basis. We are using several identical servers. Contact us if your submission is stuck for more than 24 hours. You can check the execution time on the leaderboard.
5 times per day. This may be subject to change, according to the number of participants. Please respect other users. It is forbidden to register under multiple user IDs to gain an advantage and make more submissions. Violators will be DISQUALIFIED FROM THE CONTEST.
Yes. Please contact us if you think the failure is due to the platform rather than to your code and we will try to resolve the problem promptly.
This should be avoided. If a submission exceeds the time budget for a particular task (dataset), the submission handling process (the ingestion program in particular) is killed when the time budget is used up, and the predictions made so far are used for evaluation. If a submission exceeds the total compute time per day, all running tasks are killed by CodaLab, the status is marked 'Failed' and a score of -1.0 is produced.
No, sorry, not for this challenge.
RMSE is used per dataset. More info on evaluation can be found at "Get Started - Evaluation".
The code was tested under Python 3.6.9. We are running Python 3.6.9 on the server and the same libraries are available.
Yes. Any Linux executable can run on the system, provided that it fulfills our Python interface and you bundle all necessary libraries with your submission.
No.
vergilgxw/autotable:v3.
When you submit code to Codalab, your code is executed inside a Docker container. This environment can be exactly reproduced on your local machine by downloading the corresponding docker image.
Your last submission is shown automatically on the leaderboard. You cannot choose which submission to select. If you want another submission than the last one you submitted to "count" and be displayed on the leaderboard, you need to re-submit it.
No. If you accidentally register multiple times or have multiple accounts from members of the same team, please notify the ORGANIZERS. Teams or solo PARTICIPANTS with multiple accounts will be disqualified.
We have disabled Codalab team registration. To join as a team, just share one account with your team. The team leader is responsible for making submissions and observing the rules.
You cannot. If you need to destroy your team, contact us.
It is up to you and the team leader to make arrangements. However, you cannot participate in multiple teams.
No. Please note that you can only train/predict in the train/predict methods. save/load methods are reserved for saving/loading models only. If we discover that you are trying to cheat in this way you will be disqualified. All your actions are logged and your code will be examined if you win.
ALL INFORMATION, SOFTWARE, DOCUMENTATION, AND DATA ARE PROVIDED "AS-IS". UPSUD, CHALEARN, IDF, AND/OR OTHER ORGANIZERS AND SPONSORS DISCLAIM ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR ANY PARTICULAR PURPOSE, AND THE WARRANTY OF NON-INFRINGEMENT OF ANY THIRD PARTY'S INTELLECTUAL PROPERTY RIGHTS. IN NO EVENT SHALL ISABELLE GUYON AND/OR OTHER ORGANIZERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF SOFTWARE, DOCUMENTS, MATERIALS, PUBLICATIONS, OR INFORMATION MADE AVAILABLE FOR THE CHALLENGE. In case of dispute or possible exclusion/disqualification from the competition, the PARTICIPANTS agree not to take immediate legal action against the ORGANIZERS or SPONSORS. Decisions can be appealed by submitting a letter to the CHALEARN president, and disputes will be resolved by the CHALEARN board of directors. See contact information.
For questions of general interest, THE PARTICIPANTS should post their questions to the forum.
Other questions should be directed to the organizers.
Start: Nov. 21, 2019, 3:59 p.m.
Description: Please make submissions by clicking the 'Submit' button below. You can then view the results of your algorithm on each dataset in the corresponding tab (Dataset 1, Dataset 2, etc.).
| Label | Start |
|---|---|
| Dataset 1 | Nov. 21, 2019, 3:59 p.m. |
| Dataset 2 | Nov. 21, 2019, 3:59 p.m. |
| Dataset 3 | Nov. 21, 2019, 3:59 p.m. |
| Dataset 4 | Nov. 21, 2019, 3:59 p.m. |
| Dataset 5 | Nov. 21, 2019, 3:59 p.m. |
End: Jan. 8, 2020, 3:59 p.m.