This page contains information about accessing the data, preparing your submission, and the rules for participating. Please read this page carefully. All teams are responsible for knowing and adhering to the guidelines laid out below in order for their submission to be considered. The challenge organizers reserve the right to disqualify any teams who violate these rules.
The training data for this challenge is subject to additions and edits between February 1 and July 1, 2021. We are using a GitHub repository to manage version control. The segmentations are stored on GitHub directly, but the imaging must be fetched from separate servers by following the instructions in the repository's README.md.
The image and segmentation data are available under a CC BY-NC-SA 4.0 license. This license specifies noncommercial use (i.e., don't rebrand and sell our data), but we do not consider using the data to compete in this challenge to be a commercial use. We therefore allow (and encourage) teams from industry to participate.
The task is to develop a model which can predict high-quality segmentations for kidneys, kidney tumors, kidney cysts, ureters, renal arteries, and renal veins. The training data consists of many examples on which models can be trained and validated. An explanation for how teams will be evaluated and ranked can be found on the homepage.
Teams are allowed to use other data in addition to the official training set in order to construct their models; however, that data must be publicly available. This is to prevent unfair advantages for teams that may have amassed large private datasets. All external data use must be described in detail in each team's accompanying paper.
Wondering where to start? Several teams that competed in the 2019 KiTS Challenge have made the code for their submission publicly available. A (not comprehensive) list is given below.
You might also benefit from reading the papers of teams that submitted to KiTS19.
The primary goal of challenges like KiTS is to objectively assess the performance of competing methods. This is only possible if teams provide a complete description of the methods they use. In order to encourage high-quality reporting, we will be implementing two changes this year:
OpenReview.net will be used to manage the editorial process. Papers will be accepted from August 2 until August 23, 2021, and reviews will be returned within one week of upload. In cases where revisions are needed, these revisions must be made and accepted before any submissions are allowed. Therefore, early paper submission is encouraged.
Once accepted, short papers about KiTS21 submissions will be published and indexed just like any peer-reviewed publication. You will be asked to provide a signed "consent to publish" form when you make your submission. By doing so, you are not handing over copyright to your work, but you are giving us permission to post your paper publicly.
The paper submission link can be found here.
Unlike 2019, participants will not be given access to the test imaging. Instead, teams must prepare a Docker container capable of reading the test imaging and making predictions on a private server managed by the organizers. These Docker containers will remain the intellectual property of the team making the submission, and they will be used only for the purposes of evaluation for the KiTS21 challenge.
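In practice, a container like this usually amounts to a small entrypoint that scans an input directory for imaging, runs the team's model on each case, and writes one prediction per case to an output directory. The sketch below illustrates that shape only; the directory layout, file naming, and the `predict` stub are assumptions for illustration, not the official KiTS21 interface.

```python
from pathlib import Path


def predict(image_path: Path) -> bytes:
    """Stand-in for a team's real inference code; returns dummy bytes here."""
    return b"segmentation"


def run_inference(input_dir: Path, output_dir: Path) -> list:
    """Read each imaging file and write one prediction file per case.

    The "*.nii.gz" pattern and mirrored file names are hypothetical; a real
    container would follow whatever interface the organizers specify.
    """
    output_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for image_path in sorted(input_dir.glob("*.nii.gz")):
        seg = predict(image_path)
        out_path = output_dir / image_path.name  # one output per input case
        out_path.write_bytes(seg)
        written.append(out_path)
    return written
```

Keeping the entrypoint this simple makes it easy to smoke-test locally on the sanity-check cases before packaging it into an image.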
We recognize that not all participants will have experience with Docker, and we intend to work hard to make this process as smooth and easy as possible.
In order to make efficient use of data, the "sanity-check" set will consist of just two cases (case_00000 and case_00001), and it should be used only to ensure your container is producing expected results. No official "validation" set will be released, since we believe teams should have the freedom to validate as they see fit.
In order to ensure that inference will run on the test set in a reasonable amount of time, teams must demonstrate that their container will finish its predictions on the sanity-check set in less than one hour. Containers will be run on AWS instances with access to a single NVIDIA T4 card (16GB GPU memory) and 4 CPUs with 30GB of CPU memory.
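A simple way to check the one-hour requirement before submitting is to time your inference loop on the sanity-check cases locally. The helper below is a generic sketch, not part of the official tooling; `predict` and `cases` are placeholders for a team's own inference function and case list.

```python
import time


def timed_run(predict, cases, budget_seconds=3600.0):
    """Run predict() on each case and compare the total wall-clock time
    against a budget (default: the one-hour limit on the sanity-check set).

    Returns (elapsed_seconds, within_budget).
    """
    start = time.perf_counter()
    for case in cases:
        predict(case)
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= budget_seconds
```

Note that local hardware may be faster than the evaluation server's single T4, so leaving generous headroom is prudent.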
More details about the submission process will be released here prior to July 15, 2021.
A brief KiTS21 satellite event will be held at MICCAI 2021 in Strasbourg, France (knock on wood) on September 27, 2021. The top-5 scoring teams will be invited to give oral presentations and receive certificates, and the #1 team will be awarded a $5000 prize sponsored by Histosonics, Inc.
I have a question about the rules. Where should I ask it?
The Discourse Forum is the best place to ask questions about the rules.
I found a problem with the data. Where should I post it?
Unlike 2019, we'd prefer that you discuss issues with the data on the Discourse Forum. We want to make sure these issues are discussed where participants are most likely to see them.
Are companies allowed to compete in KiTS21?
Yes! We do not consider participation to violate the terms of the CC BY-NC-SA license.
Am I required to make my source code public?
Should I make my source code public?
I've cloned the repository but I only see the segmentations. Where is the imaging?
The repository's README.md file has instructions for fetching the image data after you've cloned or pulled the repository.
Am I allowed to use outside data to help train my nets?
You may use outside data only if it is publicly available. Note that you must report any outside data use in your short paper.
Is there clinical data available for these patients?
Yes! The KiTS19 repository's data/kits.json file contains significant clinical data about several hundred of the cases, and it's updated periodically. We hope to release analogous data for the cases that are new to the 2021 challenge, but this might take some time.
Can I use this data for an unrelated paper?