Congratulations to the ECML PKDD AutoCV2 winners (see final phase result table at the bottom of the page):
This is AutoCV2, the IMAGE + VIDEO Automated Computer Vision challenge, part of the ECML PKDD 2019 competition program! This is a 2-phase challenge; after the feed-back phase (leaderboard shown here), we ran a final blind test starting August 20. The LAST SUBMISSION of each participant who ranked better than baseline2 in the feed-back phase was used in the final phase to determine the winners. This submission was tested on five new datasets, and the training/testing was repeated 3 times to reduce the variance due to variability in computation time.
The winners will present at the ECML PKDD conference.
The winners of the AutoCV1 challenge (IJCNN conference) were [slides]:
The first AutoCV challenge had only images; in AutoCV2 we have images + video, getting one step closer to full AutoDL. Despite recent successes of deep learning and other machine learning techniques, practical experience and expertise are still required to select models and/or choose hyper-parameters when applying these techniques to new datasets. This problem is drawing increasing interest, yielding progress towards fully automated solutions. In this challenge, your machine learning code is trained and tested on this platform, without any human intervention, on image or video classification tasks you have never seen before, with time and memory limitations. All problems are multi-label classification problems, coming from various domains including medical imaging, satellite imaging, object recognition, character recognition, face recognition, etc. They lend themselves to deep learning solutions, but other methods may be used. Raw data is provided, but formatted in a uniform manner, to encourage you to submit generic algorithms.
User | Submissions | Dataset 1 | Dataset 2 | Dataset 3 | Dataset 4 | Dataset 5 | <Rank> | Max 1 | Max 2 | Max 3 | Max 4 | Max 5 | <Max Rank> | Final Max Rank | Final Rank
kakaobrain | 6271 6385 6505 6655 6661 6667 6745 6751 6757 | 0.6277±0.0628 (8) | 0.9048±0.0517 (5) | 0.4076±0.0139 (8) | 0.4640±0.0443 (2) | 0.2091±0.0122 (3) | 5.2 | 0.6963 (6) | 0.9276 (1) | 0.4206 (7) | 0.5067 (2) | 0.2217 (3) | 3.8 | 1 | 1
tanglang | 6619 6625 6631 6817 6823 6829 6835 6841 6847 | 0.6231±0.0449 (9) | 0.8406±0.0461 (11) | 0.4527±0.0270 (4) | 0.3688±0.0260 (6) | 0.2363±0.0130 (1) | 6.2 | 0.6635 (8) | 0.8772 (10) | 0.4734 (3) | 0.4105 (5) | 0.2507 (1) | 5.4 | 2 | 2
kvr | 6283 6397 6517 6709 6715 6721 6799 6805 6811 | 0.6835±0.0299 (2) | 0.9115±0.0150 (3) | 0.4658±0.0083 (2) | -0.0417±0.0060 (16) | 0.1627±0.0120 (8) | 6.2 | 0.7174 (2) | 0.9226 (2) | 0.4778 (2) | -0.0289 (17) | 0.1810 (7) | 6 | 4 | 2
DXY0808 | 6373 6493 6607 6691 6697 6703 6781 6787 6793 | 0.6469±0.0268 (6) | 0.8673±0.0094 (8) | 0.3560±0.0158 (11) | 0.3702±0.0235 (5) | 0.2223±0.0159 (2) | 6.4 | 0.6763 (7) | 0.8800 (9) | 0.3690 (11) | 0.4056 (6) | 0.2403 (2) | 7 | 6 | 4
ether | 6307 6427 6541 6673 6679 6685 6763 6769 6775 | 0.6756±0.0301 (5) | 0.9086±0.0065 (4) | 0.4700±0.0065 (1) | -0.0430±0.0088 (18) | 0.1711±0.0153 (6) | 6.8 | 0.7043 (4) | 0.9181 (5) | 0.4781 (1) | -0.0255 (15) | 0.1979 (4) | 5.8 | 3 | 5
Hana.Inst.Tech | 6379 6499 6613 | 0.6815±0.0180 (3) | 0.9194±0.0018 (1) | 0.4640±0.0067 (3) | -0.0550±0.0082 (19) | 0.1523±0.0087 (9) | 7 | 0.7005 (5) | 0.9213 (3) | 0.4688 (5) | -0.0463 (19) | 0.1623 (9) | 8.2 | 8 | 6
myelinio | 6301 6421 6535 6637 6643 6727 6733 6739 6853 | 0.6774±0.0370 (4) | 0.8829±0.0657 (6) | 0.4491±0.0255 (5) | -0.0420±0.0082 (17) | 0.1724±0.0114 (5) | 7.4 | 0.7114 (3) | 0.9207 (4) | 0.4702 (4) | -0.0276 (16) | 0.1853 (6) | 6.6 | 5 | 7
Letrain | 6289 6403 6523 | 0.4684±0.0020 (14) | 0.8406±0.0075 (10) | 0.4030±0.0028 (9) | 0.3972±0.0222 (3) | 0.1865±0.0021 (4) | 8 | 0.4707 (14) | 0.8489 (12) | 0.4047 (10) | 0.4147 (4) | 0.1878 (5) | 9 | 9 | 8
team_zhaw | 6277 6391 6511 | 0.5418±0.0340 (10) | 0.8355±0.0915 (12) | 0.4110±0.0072 (7) | 0.3970±0.0298 (4) | 0.1677±0.0052 (7) | 8 | 0.5776 (10) | 0.9006 (7) | 0.4166 (8) | 0.4178 (3) | 0.1734 (8) | 7.2 | 7 | 8
automl_freiburg | 6295 6415 6529 | 0.1836±0.0022 (20) | 0.9138±0.0021 (2) | 0.4009±0.0079 (10) | 0.5169±0.0404 (1) | 0.1031±0.0070 (14) | 9.4 | 0.1856 (20) | 0.9158 (6) | 0.4066 (9) | 0.5494 (1) | 0.1111 (13) | 9.8 | 10 | 10
accheng | 6331 6451 6565 | 0.6912±0.0366 (1) | 0.7757±0.0466 (14) | 0.4455±0.0080 (6) | -0.0075±0.0019 (14) | 0.0543±0.0128 (17) | 10.4 | 0.7331 (1) | 0.8098 (13) | 0.4547 (6) | -0.0053 (14) | 0.0648 (17) | 10.2 | 11 | 11
mmadadi | 6361 6481 6595 | 0.6376±0.0149 (7) | 0.8540±0.0040 (9) | 0.2131±0.0039 (14) | 0.2712±0.0276 (8) | 0.0907±0.0052 (15) | 10.6 | 0.6548 (9) | 0.8582 (11) | 0.2176 (14) | 0.2970 (8) | 0.0958 (15) | 11.4 | 13 | 12
upwind_flys | 6313 6433 6547 | 0.5220±0.0459 (11) | 0.7933±0.0030 (13) | 0.3523±0.0040 (12) | 0.2862±0.0196 (7) | 0.1358±0.0179 (11) | 10.8 | 0.5630 (11) | 0.7967 (14) | 0.3567 (12) | 0.3036 (7) | 0.1519 (11) | 11 | 12 | 13
brunosez | 6343 6463 6577 | 0.4822±0.0094 (13) | 0.7149±0.0600 (17) | 0.2077±0.0050 (15) | 0.1117±0.0305 (11) | 0.1114±0.0080 (12) | 13.6 | 0.4912 (13) | 0.7506 (16) | 0.2132 (15) | 0.1357 (11) | 0.1206 (12) | 13.4 | 14 | 14
baseline2 | 6355 6475 6589 | 0.4514±0.0196 (16) | 0.7297±0.0120 (16) | 0.2199±0.0100 (13) | 0.1388±0.0177 (10) | 0.1055±0.0040 (13) | 13.6 | 0.4633 (15) | 0.7381 (17) | 0.2306 (13) | 0.1499 (10) | 0.1094 (14) | 13.8 | 15 | 14
OsbornArchibald | 6319 6439 6553 | 0.4562±0.0042 (15) | 0.8679±0.0112 (7) | 0.0248±0.0067 (18) | -0.0363±0.0032 (15) | 0.0393±0.0081 (18) | 14.6 | 0.4597 (16) | 0.8808 (8) | 0.0296 (18) | -0.0340 (18) | 0.0471 (18) | 15.6 | 17 | 16
Pavao | 6325 6445 6559 | 0.2293±0.0157 (19) | 0.5047±0.0341 (19) | 0.0598±0.0076 (17) | 0.1688±0.0154 (9) | 0.1389±0.0220 (10) | 14.8 | 0.2474 (19) | 0.5423 (19) | 0.0673 (17) | 0.1801 (9) | 0.1622 (10) | 14.8 | 16 | 17
bamboo_pandas | 6337 6457 6571 | 0.2576±0.0160 (18) | 0.7541±0.0243 (15) | -1.0000±0.0000 (20) | 0.0015±0.0005 (13) | 0.0680±0.0041 (16) | 16.4 | 0.2691 (18) | 0.7820 (15) | -1.0000 (20) | 0.0021 (13) | 0.0721 (16) | 16.4 | 18 | 18
baseline1 | 6367 6487 6601 | 0.3098±0.0044 (17) | 0.3590±0.0068 (20) | 0.1349±0.0147 (16) | 0.0038±0.0033 (12) | 0.0177±0.0029 (19) | 16.8 | 0.3148 (17) | 0.3667 (20) | 0.1471 (16) | 0.0068 (12) | 0.0198 (19) | 16.8 | 19 | 19
chenweiwei-1 | 6349 6469 6583 | 0.5118±0.0337 (12) | 0.6752±0.0128 (18) | -0.0156±0.0062 (19) | -1.0000±0.0000 (20) | -0.0116±0.0220 (20) | 17.8 | 0.5456 (12) | 0.6842 (18) | -0.0089 (19) | -1.0000 (20) | 0.0101 (20) | 17.8 | 20 | 20
This is a challenge with code submission. We provide 3 baseline methods for test purposes (note: to keep test runs short, we set self.num_epochs_we_want_to_train = 1 in model.py; you may change that):
Baseline 0: Constant (zero) predictions
Baseline 1: Linear classifier
Baseline 2: 3D Convolutional Neural Network
To make a test submission, download one of the baseline methods, click on the blue button "Upload a Submission" in the upper right corner of the page and re-upload it. You must click first the orange tab "All datasets" if you want to make a submission simultaneously on all datasets and get ranked in the challenge. You may also submit on a single dataset at a time (for debug purposes). To check progress on your submissions go to the "My Submissions" tab. Your best submission is shown on the leaderboard visible under the "Results" tab.
The starting kit contains everything you need to create your own code submission (just by modifying the file model.py) and to test it on your local computer, with the same handling programs and Docker image as those of the Codalab platform (but the hardware environment is in general different).
This includes a Jupyter notebook tutorial.ipynb with step-by-step instructions. The interface is simple and generic: you must supply in model.py a Python class with a constructor taking the dataset metadata, a "train" method, a "test" method, and a "done_training" attribute (see the sketch below and the pseudo-code of the evaluation protocol further down).
To make submissions, zip model.py (without the directory), then use the "Upload a Submission" button. That's it!
Since for one dataset a submission may take up to 20 minutes and there are 5 datasets, if you do not stop your model early, you will only be able to make 3 full submissions (on all datasets) per day: 3 times x 5 datasets x (1/3 h)/dataset ~ 5h. However, you may manage your time in a more effective way by stopping your models early. This is done by setting the "done_training" attribute to "True" once you are done training, e.g. after a certain number of epochs.
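To make the interface concrete, here is a minimal sketch of model.py (not an official baseline): it has the constructor, "train", "test" and "done_training" elements mentioned above, and simply returns constant (zero) predictions. The metadata accessors (size(), get_output_size()) and the fixed number of training increments are assumptions for illustration; see the starting kit for the real interface.
# model.py -- minimal illustrative sketch (constant predictions, no learning)
import numpy as np

class Model:
    def __init__(self, metadata):
        self.metadata = metadata
        self.done_training = False   # set to True to stop the train/test loop early
        self.train_calls = 0

    def train(self, dataset, remaining_time_budget=None):
        # Train in small increments so that intermediate predictions get scored
        # (the metric is the area under the learning curve).
        self.train_calls += 1
        if self.train_calls >= 10 or (remaining_time_budget is not None
                                      and remaining_time_budget < 60):
            self.done_training = True  # stop before the time budget runs out

    def test(self, dataset, remaining_time_budget=None):
        # One row of scores per test example; here, constant zeros (baseline 0 style).
        num_test = self.metadata.size()                # assumed metadata accessor
        num_classes = self.metadata.get_output_size()  # assumed metadata accessor
        return np.zeros((num_test, num_classes))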
The starting kit contains sample data, but you may want to develop your code with larger practice datasets. We provide 8 public datasets for this purpose. You will have access to the data (training set and test set) AND the true labels for these datasets. Notice that the video datasets do not include a sound track.
# | Name | Type | Domain | Size | Source | Data (w/o test labels) | Test labels |
1 | Munster | Image | HWR | 18 MB | MNIST | munster.data | munster.solution |
2 | Chucky | Image | Objects | 128 MB | Cifar-100 | chucky.data | chucky.solution |
3 | Pedro | Image | People | 377 MB | PA-100K | pedro.data | pedro.solution |
4 | Decal | Image | Aerial | 73 MB | NWPU VHR-10 | decal.data | decal.solution |
5 | Hammer | Image | Medical | 111 MB | Ham10000 | hammer.data | hammer.solution |
6 | Kraut | Video | Action | 1.9 GB | KTH | kraut.data | kraut.solution |
7 | Katze | Video | Action | 1.9 GB | KTH | katze.data | katze.solution |
8 | Kreatur | Video | Action | 469 MB | KTH | kreatur.data | kreatur.solution |
# | Name | num_train | num_test | sequence_size | row_count | col_count | num_channels | output_dim |
1 | Munster | 60000 | 10000 | 1 | 28 | 28 | 1 | 10 | |
2 | Chucky | 48061 | 11939 | 1 | 32 | 32 | 3 | 100 | |
3 | Pedro | 80095 | 19905 | 1 | -1 | -1 | 3 | 26 | |
4 | Decal | 634 | 166 | 1 | -1 | -1 | 3 | 11 | |
5 | Hammer | 8050 | 1965 | 1 | 400 | 300 | 3 | 7 | |
6 | Kraut | 1528 | 863 | 181 | 120 | 160 | 1 | 4 | |
7 | Katze | 1528 | 863 | 181 | 120 | 160 | 1 | 6 | |
8 | Kreatur | 1528 | 863 | 181 | 60 | 80 | 3 | 4 |
These data were re-formatted from the original public datasets. If you use them, please make sure to acknowledge the original data donors (see "Source" in the data table) and check the terms of use.
To download all public datasets at once:
cd autodl_starting_kit_stable
python download_public_datasets.py
The raw data are preserved, but formatted in a generic data format based on TFRecords, used by TensorFlow. However, this does not require participants to use deep learning algorithms, nor even TensorFlow. If you want to practice designing algorithms with your own datasets, follow these steps.
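If you just want to peek inside one of the public datasets, the short sketch below reads a single raw record and prints the feature keys it contains. It assumes TensorFlow 2 eager execution and a hypothetical file path; we believe the AutoDL format stores each example as a tf.train.SequenceExample, but check the starting kit for the exact layout.
# Inspect one record of a public dataset (hypothetical path).
import tensorflow as tf

record_path = "munster.data/train/sample-munster-train.tfrecord"  # hypothetical
raw_dataset = tf.data.TFRecordDataset([record_path])
for raw_record in raw_dataset.take(1):
    example = tf.train.SequenceExample()   # assumed proto type for the AutoDL format
    example.ParseFromString(raw_record.numpy())
    print("Context keys:", list(example.context.feature.keys()))
    print("Feature-list keys:", list(example.feature_lists.feature_list.keys()))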
This challenge has two phases. This is the feed-back phase: when you submit your code, you get immediate feed-back on five development datasets. In the final test phase, you will be evaluated on five new datasets. Eligible participants to the final phase will be notified when and where to submit their code for a final blind test on these five new datasets. The ranking in the final phase will count towards determining the winners.
Code submitted is trained and tested automatically, without any human intervention. Code submitted on "All datasets" is run on all five development datasets in parallel on separate compute workers, each one with its own time budget.
The identities of the datasets used for testing on the platform are concealed. The data are provided in a raw form (no feature extraction) to encourage researchers to use Deep Learning methods performing automatic feature learning, although this is NOT a requirement. All problems are multi-label classification problems. The tasks are constrained by the time budget (20 minutes/dataset).
Here is some pseudo-code of the evaluation protocol:
# For each dataset, our evaluation program calls the model constructor:
M = Model(metadata=dataset_metadata)
# Initialize
start_time = time.time()
remaining_time_budget = overall_time_budget
# The ingestion program calls train and test repeatedly:
while not M.done_training and remaining_time_budget > 0:
    M.train(training_data, remaining_time_budget)
    remaining_time_budget = start_time + overall_time_budget - time.time()
    results = M.test(test_data, remaining_time_budget)
    remaining_time_budget = start_time + overall_time_budget - time.time()
    # Results are made available to the scoring program (run in a separate container)
    save(results)
It is the responsibility of the participants to make sure that neither the "train" nor the "test" method exceeds the "remaining_time_budget". The "train" method can manage its time budget so that it trains in varying time increments. There is an incentive not to spend the whole "overall_time_budget" on the first iteration, because we use the area under the learning curve as the metric.
The participants can train in batches of pre-defined duration to incrementally improve their performance, until the time limit is attained. In this way we can plot learning curves: "performance" as a function of time. Each time the "train" method terminates, the "test" method is called and the results are saved, so the scoring program can use them, together with their timestamp.
We treat both multi-class and multi-label problems alike. Each label/class is considered a separate binary classification problem, and we compute the normalized AUC (or Gini coefficient)
2 * AUC - 1
as the score for each prediction, where AUC is the usual area under the ROC curve (ROC AUC).
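As an illustration (not the official scoring program), the score of one prediction on one dataset could be computed as follows, assuming binary ground-truth and real-valued prediction arrays of shape (num_examples, num_labels):
# Average normalized AUC (2*AUC - 1) over all labels/classes.
import numpy as np
from sklearn.metrics import roc_auc_score

def normalized_auc(y_true, y_pred):
    scores = []
    for k in range(y_true.shape[1]):
        auc = roc_auc_score(y_true[:, k], y_pred[:, k])  # usual ROC AUC per label
        scores.append(2 * auc - 1)                       # normalized AUC (Gini)
    return float(np.mean(scores))
Applied to the predictions saved at each timestamp, this gives one point of the learning curve.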
For each dataset, we compute the Area under the Learning Curve (ALC). The learning curve plots the performance (average normalized AUC over all labels/classes) as a function of time, using the timestamped results saved each time the "test" method returns.
After we compute the ALC for all 5 datasets, the overall ranking is used as the final score for evaluation and is shown on the leaderboard. It is computed by averaging the ranks (among all participants) of the ALC obtained on the 5 datasets.
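The sketch below illustrates the idea of the ALC in a simplified form: the learning curve is treated as a step function of the saved scores over time normalized by the time budget. The official scoring program may use a different time transformation, so treat this only as an illustration.
# Simplified area under a step-wise learning curve.
import numpy as np

def area_under_learning_curve(timestamps, scores, time_budget=1200.0):
    # timestamps: times (in seconds) at which predictions were saved
    # scores: the corresponding average normalized AUC values
    t = np.clip(np.asarray(timestamps, dtype=float) / time_budget, 0.0, 1.0)
    s = np.asarray(scores, dtype=float)
    alc = 0.0
    for i in range(len(t)):
        t_next = t[i + 1] if i + 1 < len(t) else 1.0
        alc += s[i] * (t_next - t[i])   # score held constant until the next save
    return alc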
Examples of learning curves:
No, they can make entries that show on the leaderboard for test purposes and to stimulate participation, but they are excluded from winning prizes. Excluded entrants include: baseline0, baseline1, baseline2, baiyu, eric, hugo.jair, juliojj, Lukasz, madclam, Pavao, shangeth, thomas, tthomas, Zhen, Zhengying.
No, except accepting the TERMS AND CONDITIONS.
Yes, until the challenge deadline.
You can download "public data" only from the Instructions page. The data on which your code is evaluated cannot be downloaded, it will be visible to your code only, on the Codalab platform.
To make a valid challenge entry, make sure to click first the orange button "All datasets", then click the blue button on the upper right side "Upload a Submission". This will ensure that you submit on all 5 datasets of the challenge simultaneously. You may also make a submission on a single dataset for debug purposes, but it will not count towards the final ranking.
We provide a Starting Kit in Python with step-by-step instructions in a Jupyter notebook called "tutorial.ipynb", which can be found in the github repository https://github.com/zhengying-liu/autodl_starting_kit_stable. You can also have a well rendered preview here.
Yes. Top ranking participants will be invited to submit papers to a special issue of the IEEE Transactions journal PAMI on Automated Machine Learning and will be entered in a contest for the best paper. Deadline: November 30, 2019.
There will be 2 best paper awards of $1000 ("best paper" and "best student paper").
Yes, a 4000 USD prize pool.
 | 1st place | 2nd place | 3rd place
Prize | 2000 USD | 1500 USD | 500 USD
Yes, participation is by code submission.
No. You just grant to the ORGANIZERS a license to use your code for evaluation purposes during the challenge. You retain all other rights.
Yes, please download it [HERE].
We are running your submissions on Google Cloud NVIDIA Tesla P100 GPUs. In non-peak times we plan to use 10 workers, each with one NVIDIA Tesla P100 GPU (running CUDA 10 with cuDNN 7.5 drivers) and 4 vCPUs, 26 GB of memory, and 100 GB of disk.
The PARTICIPANTS will be informed if the computational resources increase. They will NOT decrease.
This is not explicitly forbidden, but it is discouraged. We prefer if all calculations are performed on the server. If you submit a pre-trained model, you will have to disclose it in the fact sheets.
YES. The ranking of participants will be made from a final blind test made by evaluating a SINGLE SUBMISSION made on the final test submission site. The submission will be evaluated on five new datasets in a completely "blind testing" manner. The final test ranking will determine the winners.
Each execution must run in less than 20 minutes (1200 seconds) for each dataset. Your cumulative time is limited to 5 hours per day in total.
Wall time.
In principle, no more than its time budget. We kill the process if the time budget is exceeded. Submissions are queued and run on a first-come, first-served basis. We are using several identical servers. Contact us if your submission is stuck for more than 24 hours. Check the execution time on the leaderboard.
Five per day (and up to a total of 100), but up to a total computational time of 5 hours (submissions taking longer will be aborted). This may be subject to change, depending on the number of participants. Please respect other users. It is forbidden to register under multiple user IDs to gain an advantage and make more submissions. Violators will be DISQUALIFIED FROM THE CONTEST.
No. Please contact us if you think the failure is due to the platform rather than to your code and we will try to resolve the problem promptly.
The submission evaluation logic is implemented such that most errors coming from executing your model.py are caught by the ingestion program. We made this choice so that the evaluation process always (hopefully) terminates within the scope of the ingestion program, independently of the CodaLab platform.
To find the error message, go to "My Submissions" -> "Dataset 2" (for example) -> click the "+" button of the corresponding submission -> "Output Log" of "Ingestion Step".
This should be avoided. If a submission exceeds the 20-minute time budget for a particular task (dataset), the submission handling process (the ingestion program in particular) is killed when the time budget is used up, and the predictions made so far (with their corresponding timestamps) are used for evaluation. If a submission exceeds the total compute time per day, all running tasks are killed by CodaLab, the status is marked 'Failed', and a score of -1.0 is produced.
No sorry, not for this challenge.
All problems are multi-label problems and we treat them as multiple 2-class classification problems. For a given dataset, all binary classification problems are scored with the ROC AUC and results are averaged (over all classes/binary problems). For each time step at which you save results, this gives you one point on the learning curve. The final score for one dataset is the area under the learning curve. The overall score on all 5 datasets is the average rank on the 5 datasets. For more details, go to 'Get Started' -> 'Instructions' -> 'Metrics' section.
The code was tested under Python 3.5. We are running Python 3.5 on the server and the same libraries are available.
Yes. Any Linux executable can run on the system, provided that it fulfills our Python interface and you bundle all necessary libraries with your submission.
No. We use TFRecords to format the datasets in a uniform manner, but you can use other software to process the data, including PyTorch (included in the Docker, see the following question).
evariste/autodl:gpu, see the Dockerfile and some instructions on dockerhub.
When you submit code to Codalab, your code is executed inside a Docker container. This environment can be exactly reproduced on your local machine by downloading the corresponding docker image. The docker environment of the challenge contains Anaconda libraries, TensorFlow, and PyTorch (among other things).
Non-GPU users: if you are new to Docker, follow these instructions to install it. You may then use the Docker image evariste/autodl:cpu. See details in the Starting Kit that can be downloaded from the Instructions page. GPU users: follow these more detailed instructions.
Your last submission is shown automatically on the leaderboard. You cannot choose which submission to select. If you want another submission than the last one you submitted to "count" and be displayed on the leaderboard, you need to re-submit it.
No. If you accidentally register multiple times or have multiple accounts from members of the same team, please notify the ORGANIZERS. Teams or solo PARTICIPANTS with multiple accounts will be disqualified.
We have disabled Codalab team registration. To join as a team, just share one account with your team. The team leader is responsible for making submissions and observing the rules.
You cannot. If you need to destroy your team, contact us.
It is up to you and the team leader to make arrangements. However, you cannot participate in multiple teams.
No. If we discover that you are trying to cheat in this way you will be disqualified. All your actions are logged and your code will be examined if you win.
ALL INFORMATION, SOFTWARE, DOCUMENTATION, AND DATA ARE PROVIDED "AS-IS". UPSUD, CHALEARN, IDF, AND/OR OTHER ORGANIZERS AND SPONSORS DISCLAIM ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR ANY PARTICULAR PURPOSE, AND THE WARRANTY OF NON-INFRINGEMENT OF ANY THIRD PARTY'S INTELLECTUAL PROPERTY RIGHTS. IN NO EVENT SHALL ISABELLE GUYON AND/OR OTHER ORGANIZERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF SOFTWARE, DOCUMENTS, MATERIALS, PUBLICATIONS, OR INFORMATION MADE AVAILABLE FOR THE CHALLENGE. In case of dispute or possible exclusion/disqualification from the competition, the PARTICIPANTS agree not to take immediate legal action against the ORGANIZERS or SPONSORS. Decisions can be appealed by submitting a letter to the CHALEARN president, and disputes will be resolved by the CHALEARN board of directors. See contact information.
For questions of general interest, THE PARTICIPANTS should post their questions to the forum.
Other questions should be directed to the organizers.
This challenge would not have been possible without the help of many people.
Main organizers:
Other contributors to the organization, starting kit, and datasets, include:
The challenge is running on the Codalab platform, administered by Université Paris-Saclay and maintained by CKCollab LLC, with primary developers:
ChaLearn is the challenge organization coordinator. Google is the primary sponsor of the challenge. 4Paradigm donated prizes. Other institutions of the co-organizers provided in-kind contributions.
Start: July 2, 2019, midnight
Description: Please make submissions by clicking on the 'Submit' button below. You can then view the submission results of your algorithm on each dataset in the corresponding tab (Dataset 1, Dataset 2, etc.).
Label | Description | Start
Dataset 1 | This tab contains submission results of your algorithm on Dataset 1. | July 2, 2019, midnight
Dataset 2 | This tab contains submission results of your algorithm on Dataset 2. | July 2, 2019, midnight
Dataset 3 | This tab contains submission results of your algorithm on Dataset 3. | July 2, 2019, midnight
Dataset 4 | This tab contains submission results of your algorithm on Dataset 4. | July 2, 2019, midnight
Dataset 5 | This tab contains submission results of your algorithm on Dataset 5. | July 2, 2019, midnight
Aug. 20, 2019, midnight