  • 27 Participants
  • 439 Submissions
  • Competition Ends: Oct. 29, 2019, 3:59 p.m.

Automated Weakly Supervised Learning

The Asian Conference on Machine Learning (ACML) 2019 will be held at WINC AICHI, Nagoya, Japan, from November 17 to 19, 2019. AutoWSL is one of the competitions of the main conference, provided by 4Paradigm, ChaLearn, RIKEN and Microsoft, and co-organized with WSL-workshop@ACML 2019.

UC terminated its subscriptions with the world's largest scientific publisher in a push for open access to publicly funded research: "Knowledge should not be accessible only to those who can pay," said Robert May, chair of UC's faculty Academic Senate. Similarly, machine learning should not be accessible only to those who can pay. Modern machine learning is moving into the era of complex models (e.g., deep neural networks), which require a plethora of well-annotated data. Large companies have enough money to collect well-annotated data; for startups or non-profit organizations, however, such data is barely affordable due to the cost of labeling (e.g., on crowd-sourcing platforms). Moreover, well-annotated data may simply not exist because of natural scarcity in the given domain (e.g., Alzheimer's disease or earthquake prediction). Weakly-supervised learning (WSL) is the collection of machine learning problem settings and algorithms that share the same goals as supervised learning but have access to less supervision. These facts and practical issues motivate us to research and pay attention to WSL, since it does not require such a massive amount of annotated data. However, traditional WSL methods have many hyperparameters that must be tuned per problem, which requires too much human effort to deploy a WSL method successfully.

Here, we propose the first AutoWSL competition, which aims at automated solutions for WSL-related tasks. This challenge is restricted to binary classification problems, which come from different application domains. 3 practice datasets are provided to the participants for developing AutoWSL solutions. Afterward, the solutions are evaluated, without human intervention, on 18 unseen feedback datasets and 18 private final datasets; the results on the final datasets determine the final ranking.

In the AutoWSL competition, we focus on three popular tasks in WSL, i.e., semi-supervised learning (some samples are unlabeled), positive-unlabeled learning (samples are only positive or unlabeled; there are no negative samples), and learning from noisy labels (all samples are labeled, but some labels can be wrong). These are three disjoint tasks, and they will NOT appear simultaneously in a single dataset. Auxiliary information will be provided to help participants identify which task they need to perform on each dataset.

AutoWSL will pose new challenges to the participants, as listed below:
- How to automatically deal with various kinds of WSL tasks?
- How to automatically extract useful features for different tasks?
- How to automatically handle different amounts of supervised information?
- How to automatically design effective learning models to deal with various structured data?

Additionally, participants should also consider:
- How to automatically and efficiently select proper hyper-parameters?
- How to make the solution more generic, i.e., how to make it applicable for unseen tasks?
- How to keep the computational and memory cost acceptable?  
 

Visit "Get Started" and follow the steps to participate.
 

Timeline 

Beijing Time (UTC+8)

  • Sep 24th, 2019, 23:59: Beginning of the Feedback Phase; release of the practice datasets. Participants can start submitting code and obtaining immediate feedback on the leaderboard.
  • Oct 21st, 2019, 23:59: Beginning of real personal identification.
  • Oct 24th, 2019, 23:59: End of real personal identification.
  • Oct 29th, 2019, 23:59: End of the Feedback Phase.
  • Oct 30th, 2019, 00:00: Beginning of the Check Phase.
  • Nov 02nd, 2019, 19:59: End of the Check Phase.
  • Nov 02nd, 2019, 20:00: Beginning of the Final Phase.
  • Nov 04th, 2019, 20:00: Re-submission deadline.
  • Nov 06th, 2019, 20:00: End of the Final Phase.

Note that the CodaLab platform uses UTC. Please pay attention to the time zone of the times given elsewhere on this page so as not to mistake the deadlines of each phase of the competition.

 

Prizes

  • 1st  Place: $2000
  • 2nd Place: $1500
  • 3rd Place: $500
 

About

Please contact the organizers if you have any problem concerning this challenge.

Sponsors

4Paradigm · ChaLearn · Microsoft

Advisors

- Wei-Wei Tu, 4Paradigm Inc., China, (Coordinator, Platform Administrator, Data Provider, Baseline Provider, Sponsor) tuweiwei@4paradigm.com

- Isabelle Guyon, Université Paris-Saclay, France, ChaLearn, USA, (Advisor, Platform Administrator) guyon@chalearn.org

- Qiang Yang, Hong Kong University of Science and Technology, Hong Kong, China, (Advisor, Sponsor) qyang@cse.ust.hk

Committee (alphabetical order)

- Bo Han, RIKEN-AIP, Japan, (Admin) bo.han@riken.jp

- Hai Wang, 4Paradigm Inc., China, (Dataset provider, baseline) wanghai@4paradigm.com

- Ling Yue, 4Paradigm Inc., China, (Admin) yueling@4paradigm.com

- Quanming Yao, 4Paradigm Inc., China, (Admin) yaoquanming@4paradigm.com

- Shouxiang Liu, 4Paradigm Inc., China, (Admin) liushouxiang@4paradigm.com

- Xiawei Guo, 4Paradigm Inc., China, (Admin) guoxiawei@4paradigm.com

- Zhengying Liu, Université Paris-Saclay / Université Paris-Sud, France, (Platform Provider) zhengying.liu@inria.fr

- Zhen Xu, 4Paradigm Inc., China, (Admin) xuzhen@4paradigm.com

 

Organization Institutes

4Paradigm · ChaLearn · RIKEN

About AutoML 

Previous AutoML Challenges:

- First AutoML Challenge

- AutoML@PAKDD2018

- AutoML@NeurIPS2018

- AutoML@PAKDD2019

- AutoML@KDDCUP2019

- AutoCV@IJCNN2019

- AutoCV2@ECML PKDD2019

- AutoNLP@WAIC2019

- AutoSpeech@ACML 2019

 

About 4Paradigm Inc.

Founded in early 2015, 4Paradigm is one of the world's leading AI technology and service providers for industrial applications. 4Paradigm's flagship product – the AI Prophet – is an AI development platform that enables enterprises to effortlessly build their own AI applications and thereby significantly increase the efficiency of their operations. Using the AI Prophet, a company can develop a data-driven "AI Core System", which can be regarded as a second core system next to the traditional transaction-oriented Core Banking System (IBM Mainframe) often found in banks. Beyond this, 4Paradigm has also successfully developed more than 100 AI solutions for use in various settings such as finance, telecommunications and internet applications. These solutions include, but are not limited to, smart pricing, real-time anti-fraud systems, precision marketing, personalized recommendation and more. And while it is clear that 4Paradigm can set up a whole new paradigm for how an organization uses its data, its scope of services does not stop there. 4Paradigm combines state-of-the-art machine learning technologies with practical experience to bring together a team of experts ranging from scientists to architects. This team has successfully built China's largest machine learning system and the world's first commercial deep learning system. However, 4Paradigm's success does not stop there. With its core team pioneering the research of "Transfer Learning," 4Paradigm takes the lead in this area and, as a result, has drawn great attention from worldwide tech giants.

About ChaLearn

ChaLearn is a non-profit organization with vast experience in the organization of academic challenges. ChaLearn is interested in all aspects of challenge organization, including data gathering procedures, evaluation protocols, novel challenge scenarios (e.g., competitions), training for challenge organizers, challenge analytics, result dissemination and, ultimately, advancing the state-of-the-art through challenges.

About RIKEN

RIKEN is a large scientific research institute in Japan. Founded in 1917, it now has about 3,000 scientists on seven campuses across Japan, including the main site at Wakō, Saitama Prefecture, just outside Tokyo. Riken is a Designated National Research and Development Institute, and was formerly an Independent Administrative Institution. "Riken" is a contraction of the formal name Rikagaku Kenkyūjo, and its full name in Japanese is Kokuritsu Kenkyū Kaihatsu Hōjin Rikagaku Kenkyūsho and in English is the National Institute of Physical and Chemical Research.

About Microsoft

Microsoft Corporation is an American multinational technology company with headquarters in Redmond, Washington. It develops, manufactures, licenses, supports, and sells computer software, consumer electronics, personal computers, and related services. Its best known software products are the Microsoft Windows line of operating systems, the Microsoft Office suite, and the Internet Explorer and Edge Web browsers. Its flagship hardware products are the Xbox video game consoles and the Microsoft Surface lineup of touchscreen personal computers. As of 2016, it is the world's largest software maker by revenue, and one of the world's most valuable companies. The word "Microsoft" is a portmanteau of "microcomputer" and "software". Microsoft is ranked No. 30 in the 2018 Fortune 500 rankings of the largest United States corporations by total revenue. (Credits: Wikipedia) 

 

Quick start

The baseline, starting kit, and practice datasets can be downloaded here:

 

Baseline

This is a challenge with code submission. We provide one baseline above for test purposes.

To make a test submission, download the starting kit and follow the instructions in its readme.md file, then click the blue "Upload a Submission" button in the upper right corner of the page and upload the submission zip. You must first click the orange "Feedback Phase" tab if you want to make a submission on all datasets simultaneously and get ranked in the challenge. You may also submit on a single dataset at a time (for debugging purposes). To check progress on your submissions, go to the "My Submissions" tab. Your best submission is shown on the leaderboard visible under the "Results" tab.

Starting kit

The starting kit contains everything you need to create your own code submission (just by modifying the file model.py) and to test it on your local computer, with the same handling programs and Docker image as those of the Codalab platform (but the hardware environment is in general different). 

The starting kit contains toy sample data. Besides that, 3 practice datasets are also provided so that you can develop your AutoWSL solutions offline. These 3 practice datasets can be downloaded from the link at the beginning.

Local development and testing:

You can test your code in the exact same environment as the Codalab environment using docker. You are able to run the ingestion program (to produce predictions) and the scoring program (to evaluate your predictions) on toy sample data.

1. If you are new to docker, install docker from https://docs.docker.com/get-started/.

2. At the shell, change to the starting-kit directory, run

 docker run -it -v "$(pwd):/app/codalab" vergilgxw/autotable:v2

3. Now you are in the bash of the docker container, run the local test program

  python run_local_test.py --dataset_dir=[path_to_dataset] --code_dir=[path_to_model_file]

It runs the ingestion and scoring programs simultaneously, and the predictions and scoring results are written to the sample_result_submissions and scoring_output directories.

Submission

Interface

The interface is simple and generic: you must supply a Python file model.py, in which a Model class is defined with:

  • a constructor
  • a train method
  • a predict method
  • a save method
  • a load method

The Python version on our platform is 3.6.9. Below we define the interface of the Model class in detail.

__init__(self, info):

  • info: a Python dictionary that contains the meta information about the dataset, described on the "Get Started - Dataset" page.

train(self, X_train, y, time_remain):

  • X_train: a dataframe, input tabular data.
  • y: the label of the training data in integer format, 1:positive, -1:negative, 0:unlabeled
  • time_remain: a scalar that indicates the remaining time (sec) for the submission process.

predict(self, X_test, time_remain):  

  • X_test: a dataframe, input tabular data.
  • time_remain: a scalar that indicates the remaining time (sec) for the submission process.

save(self, directory):

  • directory: directory path to store model

load(self, directory):

  • directory: directory path to restore model
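
For reference, here is a minimal sketch of a model.py that satisfies this interface. It is not the official baseline from the starting kit: the constant-score logic and the model.pkl file name are placeholder assumptions, and y is assumed to be a pandas Series.

# model.py -- minimal interface sketch (illustrative placeholder, not the official baseline)
import os
import pickle

import pandas as pd


class Model:
    def __init__(self, info):
        # info: metadata dictionary (task type, time budgets, schema, ...)
        self.info = info
        self.prior = 0.5  # placeholder "model": estimated positive-class rate

    def train(self, X_train, y, time_remain):
        # y uses 1 (positive), -1 (negative), 0 (unlabeled); assumed to be a pandas Series
        labeled = y[y != 0]
        if len(labeled) > 0:
            self.prior = float((labeled == 1).mean())

    def predict(self, X_test, time_remain):
        # return one score per test instance (higher means more likely positive)
        return pd.Series([self.prior] * len(X_test), index=X_test.index)

    def save(self, directory):
        with open(os.path.join(directory, "model.pkl"), "wb") as f:
            pickle.dump(self.prior, f)

    def load(self, directory):
        with open(os.path.join(directory, "model.pkl"), "rb") as f:
            self.prior = pickle.load(f)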

To make submissions, zip model.py and its dependency files (without the enclosing directory), then use the "Upload a Submission" button. Please note that you must first click the orange "Feedback Phase / Final Phase" tab if you want to make a submission on all datasets simultaneously and get ranked in the challenge. You may also submit on a single dataset at a time (for debugging purposes). Besides that, the ranking on the public leaderboard is determined by the LAST code submission of each participant.

Computational limitations

  • A submission on one dataset is limited to a certain time budget, which can be read from the dataset's info.json file: 'time_budget' is the time budget for training and 'pred_time_budget' is the time budget for testing.
  • Participants are limited to 5 submissions per day per dataset. However, we do not encourage submitting on a single dataset at a time, and a submission on all datasets will fail if the daily quota for any single dataset is exhausted.

Running Environment

In the starting kit, we provide a docker image that simulates the running environment of our challenge platform. Participants can check the Python version and the installed Python packages with the following commands:

 python --version

 pip list

On our platform, for each submission, the allocated computational resources are:

  • CPU: 4 Cores
  • Memory: 16 GB

Challenge Rules

  • General Terms: This challenge is governed by the General ChaLearn Contest Rule Terms, the Codalab Terms and Conditions, and the specific rules set forth.
  • Announcements: To receive announcements and be informed of any change in rules, the participants must provide a valid email.
  • Conditions of participation: Participation requires complying with the rules of the challenge. Prize eligibility is restricted by Chinese government export regulations, see the General ChaLearn Contest Rule Terms. The organizers, sponsors, their students, close family members (parents, sibling, spouse or children) and household members, as well as any person having had access to the truth values or to any information about the data or the challenge design giving him (or her) an unfair advantage, are excluded from participation. A disqualified person may submit one or several entries in the challenge and request to have them evaluated, provided that they notify the organizers of their conflict of interest. If a disqualified person submits an entry, this entry will not be part of the final ranking and does not qualify for prizes. The participants should be aware that ChaLearn and the organizers reserve the right to evaluate for scientific purposes any entry made in the challenge, whether or not it qualifies for prizes.
  • Dissemination: The challenge is organized in conjunction with the ACML 2019 conference, Workshop on Weakly Supervised Learning. Top-ranking participants will be invited to submit a paper to a special issue on Automated Machine Learning of the IEEE Transactions on PAMI.
  • Registration: The participants must register on Codalab and provide a valid email address. Teams must register only once and provide a group email, which is forwarded to all team members. Teams or solo participants registering multiple times to gain an advantage in the competition may be disqualified. One participant can only be registered in one team. Note that you can join the challenge until one week before the end of the feedback phase. Real personal identification will be required (notified by the organizers) at the end of the feedback phase to avoid duplicate accounts and for award claims.
  • Anonymity: The participants who do not present their results at the challenge session can elect to remain anonymous by using a pseudonym. Their results will be published on the leaderboard under that pseudonym, and their real name will remain confidential. However, the participants must disclose their real identity to the organizers to join check phase and final phase and to claim any prize they might win. See our privacy policy for details.
  • Submission method: The results must be submitted through this CodaLab competition site. The number of submissions per day is 5. Using multiple accounts to increase the number of submissions is NOT permitted. In case of problems, send an email to autowsl2019@4paradigm.com. The entries must be formatted as specified on the Instructions page.
  • Prizes: The top ranking participants in the Final Phase may qualify for prizes. To compete for prizes, the participants must make a valid submission on the Final Phase website (TBA), and fill out a fact sheet briefly describing their methods before the announcement of the final winners. There is no other publication requirement. The winners will be required to make their code publicly available under an OSI-approved license such as, for instance, Apache 2.0, MIT or BSD-like license, if they accept their prize, within a week of the deadline for submitting the final results. Entries exceeding the time budget will not qualify for prizes. In case of a tie, the prize will go to the participant who submitted his/her entry first. Non winners or entrants who decline their prize retain all their rights on their entries and are not obliged to publicly release their code.
  • Cheating: We forbid participants, during any phase, from attempting to get hold of the solution labels on the server (though this may be technically feasible). Generally, participants caught cheating in any sense will be disqualified. If you have questions concerning specific methods, please contact the organizers at autowsl2019@4paradigm.com.

Dataset

This page describes the datasets used in the AutoWSL challenge. 39 datasets are prepared for this competition. 3 practice datasets, which can be downloaded, are provided to the participants so that they can develop their AutoWSL solutions offline. Besides that, another 18 validation datasets are used to compute the public leaderboard scores of their AutoWSL solutions. Afterward, their solutions are evaluated on 18 test datasets without human intervention.
 

Description

This challenge is restricted to binary classification problems, which come from different application domains. We focus on three popular tasks in WSL, i.e., semi-supervised learning (some samples are unlabeled), positive-unlabeled learning (samples are only positive or unlabeled; there are no negative samples), and learning from noisy labels (all samples are labeled, but some labels can be wrong). These are three disjoint tasks, and they will not appear simultaneously in a single dataset.
 
 

Data

Components

Each dataset is split into two subsets, namely the training set and the testing set.

Both sets have:
• a main data file that stores the main table (label excluded);
• an info dictionary that contains important information about the dataset, including feature types.

The training set has an additional label file that stores the labels associated with the training data.

Table files

Each table file is a CSV file that stores a table (main or related), with '\t' as the delimiter. The first row indicates the names of features, and the following rows are the records.
The type of each feature can be found in the info dictionary that will be introduced soon.
There are 4 types of features, indicated by "cat", "num", "multi-cat", and "time", respectively:
• cat: categorical feature, an integer.
• num: numerical feature, a real value.
• multi-cat: multi-value categorical feature, a set of integers separated by commas. The size of the set is not fixed and can differ across instances; examples include the topics of an article, the words in a title, or the items bought by a user.
• time: time feature, an integer that indicates a timestamp.
Note: categorical/multi-value categorical features with a large number of values that follow a power law might be included.

Label file

The label file is associated only with the main table in the training set. It is a CSV document that contains exactly one column, with the first row as the header and the remaining indicating labels of corresponding instances in the main table. We use 1, 0, and -1 to indicate positive, unlabeled, and negative examples respectively.
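
For illustration, the following sketch shows one way to load a training table and its label file with pandas; the file names train.data and train.solution are assumptions, so use the actual names found in your downloaded dataset.

# Illustrative loading sketch; the file names below are assumptions, not official names.
import pandas as pd

X_train = pd.read_csv("train.data", sep="\t")                  # main table, '\t'-delimited
y_train = pd.read_csv("train.solution", sep="\t").iloc[:, 0]   # single label column: 1 / 0 / -1

# parse one multi-cat cell (comma-separated integers) into a Python set
def parse_multi_cat(cell):
    return set() if pd.isna(cell) else {int(v) for v in str(cell).split(",")}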

Info dictionary

Important information about each dataset is stored in a Python dictionary named info.json, which acts as an input to the participants' AutoWSL solutions. For public datasets, we provide an info.json file that contains the dictionary. Here we give details about info:
• task: 'pu', 'ssl' or 'noisy' (representing positive-unlabeled learning, semi-supervised learning and the noisy-label task, respectively).
• time_budget: the time budget of the dataset.
• start_time: DEPRECATED.
• schema: a dictionary that stores information about the table. Each key is the name of a feature, and its corresponding value indicates the type of that column.

  • Example:

    {
      "task": "pu",
      "time_budget": 500,
      "start_time": 10550933,
      "schema": {
        "c_1": "cat",
        "n_1": "num",
        "t_1": "time"
      }
    }
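
Below is a small sketch of how a solution might read info.json and use the schema; the relative file path is illustrative only.

# Illustrative use of info.json; the path is an assumption.
import json

with open("info.json") as f:
    info = json.load(f)

time_budget = info["time_budget"]       # training time budget in seconds
task = info["task"]                     # 'pu', 'ssl' or 'noisy'
cat_cols = [c for c, t in info["schema"].items() if t == "cat"]
num_cols = [c for c, t in info["schema"].items() if t == "num"]
multi_cat_cols = [c for c, t in info["schema"].items() if t == "multi-cat"]
time_cols = [c for c, t in info["schema"].items() if t == "time"]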
 

Dataset Credits

We thank the following resources for providing us with excellent datasets:
  • Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
 
 
 

Evaluation

Competition protocol

This challenge has three phases. The participants are provided with practice datasets, which can be downloaded, so that they can develop their AutoWSL solutions offline. Then, the code is uploaded to the platform and participants receive immediate feedback on the performance of their method on another eighteen validation datasets. After the Feedback Phase terminates, there is a Check Phase, in which participants are allowed to submit their code once on the private datasets in order to debug. Participants cannot read detailed logs, but they can see whether their code reports errors. Last, in the Final Phase, participants' solutions are evaluated on eighteen test datasets. The ranking in the Final Phase counts towards determining the winners.

Submitted code is trained and tested automatically, without any human intervention. Code submitted in the feedback (resp. final) phase is run on all 18 feedback (resp. final) datasets in parallel on separate compute workers, each with its own time budget.

The identities of the datasets used for testing on the platform are concealed. The data are provided in raw form (no feature extraction) to encourage researchers to perform automatic feature learning. All problems are binary classification problems. The tasks are constrained by a time budget (provided in the metafile of each dataset).

Here is some pseudo-code of the evaluation protocol:

# For each dataset, our evaluation program calls the model constructor:

from model import Model
M = Model(metadata=dataset_metadata)   # dataset_metadata is the info dictionary of the dataset

# Training: train() and save() must finish within the training time budget ('time_budget').
with timer.time_limit('training'):
    M.train(train_dataset, train_label)
    M.save(temp_dir)

# A fresh model instance is created, so predict() can only rely on what load() restores.
M = Model(metadata=dataset_metadata)

# Prediction: load() and predict() must finish within the prediction time budget ('pred_time_budget').
with timer.time_limit('predicting'):
    M.load(temp_dir)
    y_pred = M.predict(test_dataset)

It is the responsibility of the participants to make sure that neither the "train" nor the "test" methods exceed the time budget.
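
The timer.time_limit helper in the pseudo-code above belongs to the platform's handling programs and is not shown here; the following is only a rough sketch of how such a guard could be implemented (SIGALRM-based, Unix only), not the platform's actual code.

# Hypothetical time-limit guard, for illustration only (not the platform's implementation).
import signal
from contextlib import contextmanager

class TimeoutException(Exception):
    """Raised when the guarded block exceeds its time budget."""

@contextmanager
def time_limit(seconds):
    def handler(signum, frame):
        raise TimeoutException("exceeded time budget of %s seconds" % seconds)
    old_handler = signal.signal(signal.SIGALRM, handler)
    signal.alarm(int(seconds))       # arm the alarm
    try:
        yield
    finally:
        signal.alarm(0)              # disarm the alarm
        signal.signal(signal.SIGALRM, old_handler)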

Metrics

For each dataset, we compute the ROC AUC as its evaluation metric, and participants are ranked according to the AUC on each dataset. After the AUCs of all 18 datasets are computed, the overall ranking is used as the final score for evaluation and is shown on the leaderboard. It is computed by averaging the ranks (among all participants) of the AUCs obtained on the 18 datasets.
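
For local testing, the per-dataset metric can be reproduced with scikit-learn, and the rank averaging can be sketched as below (the numbers are toy values, and this is not the platform's scoring program):

# Sketch of the metric and ranking computation; all numbers are toy values.
import numpy as np
from scipy.stats import rankdata
from sklearn.metrics import roc_auc_score

# Per-dataset metric: ROC AUC of the predicted scores against the true labels.
y_true = np.array([1, 0, 1, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.3])
auc = roc_auc_score(y_true, y_score)

# Aggregation: auc_table[i, j] = AUC of participant i on dataset j.
auc_table = np.array([[0.91, 0.85, 0.70],
                      [0.88, 0.90, 0.74],
                      [0.75, 0.70, 0.80]])
ranks = np.apply_along_axis(lambda col: rankdata(-col), 0, auc_table)  # rank 1 = best AUC
final_scores = ranks.mean(axis=1)  # lower average rank = better final score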

 

FAQs

Can organizers compete in the challenge?

No, they can make entries that show on the leaderboard for test purposes and to stimulate participation, but they are excluded from winning prizes.

Are there prerequisites to enter the challenge?

No, except accepting the TERMS AND CONDITIONS.

Can I enter any time?

No, you can join the challenge until one week before the end of feedback phase. After that, we will require real personal identification (notified by organizers) to avoid duplicate accounts.

Where can I download the data?

You can download only the "practice datasets", from the Instructions page. The data on which your code is evaluated cannot be downloaded; they are visible to your code only, on the Codalab platform.

How do I make submissions?

To make a valid challenge entry, click the blue "Upload a Submission" button on the upper right side. This ensures that you submit on all datasets of the challenge simultaneously. You may also make a submission on a single dataset for debugging purposes, but it will not count towards the final ranking.

Do you provide tips on how to get started?

We provide a Starting Kit in Python with step-by-step instructions in "README.md".

Are there publication opportunities?

Yes. Top-ranking participants will be invited to submit papers to a special issue on Automated Machine Learning of the IEEE Transactions on PAMI and will be entered in a contest for the best paper. Deadline: March 15th, 2020.

There will be 2 best paper awards of $1000 ("best paper" and "best student paper").

Are there prizes?

Yes, a $4000 prize pool.

  • 1st place: $2000
  • 2nd place: $1500
  • 3rd place: $500

Do I need to submit code to participate?

Yes, participation is by code submission.

When I submit code, do I surrender all rights to that code to the SPONSORS or ORGANIZERS?

No. You just grant to the ORGANIZERS a license to use your code for evaluation purposes. You retain all other rights.

If I win, I must submit a fact sheet, do you have a template?

Yes, we will provide the fact sheet template in due time.

What is your CPU/GPU computational configuration?

On our platform, for each submission, the allocated computational resources are:

  • CPU: 4 cores
  • Memory: 16GB
  • No GPU provided

The PARTICIPANTS will be informed if the computational resources increase. They will NOT decrease.

Will there be a final test round on separate datasets?

YES. The ranking of participants will be made from a final blind test by evaluating a SINGLE SUBMISSION made on the final test submission site. The submission will be evaluated on eighteen new test datasets in a completely "blind testing" manner. The final test ranking will determine the winners.

What is my time budget?

Each execution must run within its own time budget for each dataset (provided in the metafile info.json as time_budget and pred_time_budget for training and testing, respectively).

Does the time budget correspond to wall time or CPU time?

Wall time.

My submission seems stuck, how long will it run?

In principle, no more than its time budget. We kill the process if the time budget is exceeded. Submissions are queued and run on a first-come, first-served basis. We are using several identical servers. Check the execution time on the leaderboard, and contact us if your submission is stuck for more than 24 hours.

How many submissions can I make?

5 times per day. This may be subject to change, according to the number of participants. Please respect other users. It is forbidden to register under multiple user IDs to gain an advantage and make more submissions. Violators will be DISQUALIFIED FROM THE CONTEST.

Do my failed submissions count towards my number of submissions per day?

Yes. Please contact us if you think the failure is due to the platform rather than to your code and we will try to resolve the problem promptly.

What happens if I exceed my time budget?

This should be avoided. If a submission exceeds the time budget for a particular task (dataset), the submission handling process (the ingestion program in particular) is killed when the time budget is used up, and the predictions made so far are used for evaluation. If a submission exceeds the total compute time per day, all running tasks are killed by CodaLab, the status is marked 'Failed', and a score of -1.0 is produced.

The time budget is too small, can you increase it?

No, sorry, not for this challenge.

What metric are you using?

ROC AUC is used per dataset. More info on evaluation can be found at "Get Started - Evaluation".

Which version of Python are you using?

The code was tested under Python 3.6.9. We are running Python 3.6.9 on the server and the same libraries are available.

Can I use something else than Python code?

Yes. Any Linux executable can run on the system, provided that it fulfills our Python interface and you bundle all necessary libraries with your submission.

Do I have to use TensorFlow?

No. 

Which docker are you running on Codalab?

vergilgxw/autotable:v2.

How do I test my code in the same environment that you are using before submitting?

When you submit code to Codalab, your code is executed inside a Docker container. This environment can be exactly reproduced on your local machine by downloading the corresponding docker image. 

What is meant by "Leaderboard modifying disallowed"?

Your last submission is shown automatically on the leaderboard. You cannot choose which submission to select. If you want another submission than the last one you submitted to "count" and be displayed on the leaderboard, you need to re-submit it.

Can I register multiple times?

No. If you accidentally register multiple times or have multiple accounts from members of the same team, please notify the ORGANIZERS. Teams or solo PARTICIPANTS with multiple accounts will be disqualified.

How can I create a team?

We have disabled Codalab team registration. To join as a team, just share one account with your team. The team leader is responsible for making submissions and observing the rules.

How can I destroy a team?

You cannot. If you need to destroy your team, contact us.

Can I join or leave a team?

It is up to you and the team leader to make arrangements. However, you cannot participate in multiple teams.

Can I cheat by trying to manipulate training/test in save/load functions?

No. Please note that you can only train/predict in the train/predict methods. save/load methods are reserved for saving/loading models only. If we discover that you are trying to cheat in this way you will be disqualified. All your actions are logged and your code will be examined if you win.

Can I give an arbitrary hard time to the ORGANIZERS?

ALL INFORMATION, SOFTWARE, DOCUMENTATION, AND DATA ARE PROVIDED "AS-IS". UPSUD, CHALEARN, IDF, AND/OR OTHER ORGANIZERS AND SPONSORS DISCLAIM ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR ANY PARTICULAR PURPOSE, AND THE WARRANTY OF NON-INFRINGEMENT OF ANY THIRD PARTY'S INTELLECTUAL PROPERTY RIGHTS. IN NO EVENT SHALL ISABELLE GUYON AND/OR OTHER ORGANIZERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF SOFTWARE, DOCUMENTS, MATERIALS, PUBLICATIONS, OR INFORMATION MADE AVAILABLE FOR THE CHALLENGE. In case of dispute or possible exclusion/disqualification from the competition, the PARTICIPANTS agree not to take immediate legal action against the ORGANIZERS or SPONSORS. Decisions can be appealed by submitting a letter to the CHALEARN president, and disputes will be resolved by the CHALEARN board of directors. See contact information.

Where can I get additional help?

For questions of general interest, THE PARTICIPANTS should post their questions to the forum.

Other questions should be directed to the organizers.

Feedback Phase

Start: Sept. 24, 2019, 5 p.m.

Description: Please make submissions by clicking the 'Submit' button below. Then you can view the submission results of your algorithm on each dataset in the corresponding tab (Dataset 1, Dataset 2, etc.).

Datasets:

Dataset 1 through Dataset 18, all available from Sept. 24, 2019, 5 p.m.

Competition Ends

Oct. 29, 2019, 3:59 p.m.
