This is Task 1 (Classification of Myopic Maculopathy) of the challenge; see Task 2 and Task 3 for the other two tasks.
Introduction
Myopia is a common ocular disease that affects large populations worldwide [1]. More seriously, myopia may progress to high myopia, in which visual impairment mainly results from the development of different types of myopic maculopathy [2, 3]. In many countries, such as Japan, China, Denmark and the United States, myopic maculopathy is one of the leading causes of visual impairment and legal blindness [4, 5]. According to its severity, myopic maculopathy can be classified into five categories: no macular lesions, tessellated fundus, diffuse chorioretinal atrophy, patchy chorioretinal atrophy and macular atrophy [4]. In addition, three "Plus" lesions are defined and added to these categories: lacquer cracks (Lc), choroidal neovascularization (CNV), and Fuchs spot (Fs). Myopic maculopathy is likely to progress more quickly after the stage of tessellated fundus [3], and it is estimated that about 90% of eyes with CNV show progression of myopic maculopathy [3]. At present, fundus photography is a commonly used imaging modality for diagnosing myopic maculopathy: it helps doctors diagnose myopic maculopathy accurately and quickly, and it also enables spherical equivalent prediction without mydriasis. Prompt screening and intervention are necessary to prevent further progression of myopic maculopathy and avoid vision loss. However, diagnosis currently relies on image-by-image manual inspection, which is time-consuming and depends heavily on the experience of ophthalmologists. Therefore, an effective computer-aided system is essential to help ophthalmologists analyze myopic maculopathy and thus provide accurate diagnosis and timely intervention for this disease.
Aiming to advance the state of the art in automatic myopic maculopathy analysis, we organize the Myopic Maculopathy Analysis Challenge (MMAC). The challenge encourages researchers to develop algorithms for different tasks in myopic maculopathy analysis from fundus photographs, including classification of myopic maculopathy, segmentation of myopic maculopathy "Plus" lesions, and spherical equivalent prediction. On the one hand, the classification and segmentation tasks provide a basis for automated analysis of myopic maculopathy in clinical practice. On the other hand, higher degrees of myopia are associated with an increased risk of severe types of myopic maculopathy [5], so spherical equivalent prediction can help assess the risk of myopic maculopathy. With this dataset, researchers can evaluate their algorithms and compare them fairly against others. To the best of our knowledge, it is the first dataset of fundus images that covers the classification and segmentation of myopic maculopathy as well as the prediction of spherical equivalent. We believe this challenge is an important milestone in myopic maculopathy analysis and hope that it will drive innovation in automatic medical image analysis.
The MMAC2023 challenge is the first edition, held in conjunction with MICCAI2023. Three tasks are proposed (participants are free to choose one or more tasks):
[1] Holden B A, Fricke T R, Wilson D A, et al. Global prevalence of myopia and high myopia and temporal trends from 2000 through 2050. Ophthalmology, 2016, 123(5): 1036-1042.
[2] Ikuno Y. Overview of the complications of high myopia. Retina, 2017, 37(12): 2347-2351.
[3] Silva R. Myopic maculopathy: a review. Ophthalmologica, 2012, 228(4): 197-213.
[4] Ohno-Matsui K, Kawasaki R, Jonas J B, et al. International photographic classification and grading system for myopic maculopathy. American Journal of Ophthalmology, 2015, 159(5): 877-883.e7.
[5] Yokoi T, Ohno-Matsui K. Diagnosis and treatment of myopic maculopathy. The Asia-Pacific Journal of Ophthalmology, 2018, 7(6): 415-421.
* For any further questions, please email us at mmac23miccai@163.com or start a thread in the challenge forum.
The metrics calculation code can be found here; an illustrative sketch of these metrics is also shown after the list below.
Task 1: Quadratic-weighted Kappa (QWK), F1 score, Specificity.
Task 2: Dice Similarity Coefficient (DSC).
Task 3: Coefficient of determination R-squared, Mean Absolute Error (MAE).
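The following is a minimal sketch of how these metrics can be computed with scikit-learn and NumPy. The averaging choices (macro-averaging for F1 and specificity) are illustrative assumptions; the official implementation is the metrics calculation code linked above.

# A minimal sketch of the three tasks' metrics, assuming standard
# scikit-learn definitions; macro-averaging for F1 and specificity is an
# assumption made for illustration.
import numpy as np
from sklearn.metrics import (cohen_kappa_score, confusion_matrix, f1_score,
                             mean_absolute_error, r2_score)

def task1_metrics(y_true, y_pred):
    """Task 1: quadratic-weighted kappa, macro F1 and macro specificity."""
    qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
    f1 = f1_score(y_true, y_pred, average="macro")
    cm = confusion_matrix(y_true, y_pred)
    fp = cm.sum(axis=0) - np.diag(cm)                              # false positives per class
    tn = cm.sum() - cm.sum(axis=0) - cm.sum(axis=1) + np.diag(cm)  # true negatives per class
    specificity = np.mean(tn / (tn + fp))                          # macro-averaged TN / (TN + FP)
    return qwk, f1, specificity

def task2_dice(pred_mask, true_mask, eps=1e-8):
    """Task 2: Dice similarity coefficient for a single binary mask."""
    pred_mask = pred_mask.astype(bool)
    true_mask = true_mask.astype(bool)
    inter = np.logical_and(pred_mask, true_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + true_mask.sum() + eps)

def task3_metrics(y_true, y_pred):
    """Task 3: coefficient of determination (R-squared) and MAE."""
    return r2_score(y_true, y_pred), mean_absolute_error(y_true, y_pred)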
For each task, the best results on the test set will be used for ranking. The ranking methods are as follows (a small illustrative sketch of the ranking schemes is given after this list):
Task 1: First, separate metric scores are computed for the quadratic-weighted kappa, F1 score and specificity on all test cases. Second, the ranking score is obtained by averaging these metric scores. The achieved quadratic-weighted kappa is used as a tie-break.
Task 2: The Dice similarity coefficient is calculated for each class independently, and the results are averaged for the final ranking. In case of a tie, recall and precision are used for auxiliary ranking.
Task 3: First, separate rankings are computed for R-squared and mean absolute error on all test cases. Second, the mean ranking of each team is computed as the average of the two rankings. The achieved R-squared is used as a tie-break.
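Below is a small sketch of the two aggregation schemes described above (score averaging for Task 1, mean ranking for Task 3), using pandas. The team names and numbers are made up purely for illustration.

# Illustrative ranking aggregation; all team names and scores are invented.
import pandas as pd

# Task 1: average the three metric scores; ties are broken by QWK.
task1 = pd.DataFrame({
    "team": ["A", "B", "C"],
    "qwk": [0.85, 0.88, 0.85],
    "f1": [0.70, 0.66, 0.70],
    "specificity": [0.95, 0.96, 0.95],
})
task1["score"] = task1[["qwk", "f1", "specificity"]].mean(axis=1)
task1 = task1.sort_values(["score", "qwk"], ascending=False)

# Task 3: rank teams by R-squared (higher is better) and MAE (lower is
# better), average the two ranks, and break ties by R-squared.
task3 = pd.DataFrame({
    "team": ["A", "B", "C"],
    "r2": [0.62, 0.60, 0.58],
    "mae": [0.70, 0.65, 0.72],
})
task3["rank_r2"] = task3["r2"].rank(ascending=False)
task3["rank_mae"] = task3["mae"].rank(ascending=True)
task3["mean_rank"] = (task3["rank_r2"] + task3["rank_mae"]) / 2.0
task3 = task3.sort_values(["mean_rank", "r2"], ascending=[True, False])

print(task1[["team", "score"]])
print(task3[["team", "mean_rank"]])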
During the validation phase, participants will have the opportunity to evaluate their methods on the validation set and ensure that their code submission is in the correct format. Please note that this phase does not count towards the final rankings. Once a submission is complete, the scoring program runs automatically and the evaluation results are displayed on the leaderboard; the best result among multiple submissions is shown there.
During the test phase, the submissions from each team will be evaluated automatically on the test set. The leaderboard will be hidden and not visible to participants. Each team is allowed a maximum of four submissions, and the best result will be taken as the final challenge result. Once the challenge is over, we will publicly release the evaluation results on the test set as well as the rankings of each team. Please note that only submissions from the TEAM LEADER are eligible for the final ranking.
After the competition, participants may continue to evaluate their algorithms by submitting test set results in the Future Test Phase. Please note that the Future Test Phase is not part of MMAC, and submissions in this phase will not affect the final winners of the competition. The winners are determined solely based on the results from the Test Phase.
Participants of this challenge acknowledge that they have read and agree to the following rules:
To qualify for the official ranking, participants must submit a corresponding paper outlining the key steps of their final method, including data preprocessing and augmentation, method description, post-processing, and results. The paper must follow the Springer Lecture Notes in Computer Science (LNCS) format, which is available on Overleaf.
[2023.08.28] Please note that only teams or individuals submitting challenge papers are eligible for the final rankings.
[2023.08.28] The validation set for each task has been released. The download link can be found in the "Get Data" section of the "participate" tab. At present, there are no plans to release the test set to the public. If participating teams are interested in evaluating the performance of their models on the test set, including ablation experiments and other analyses, we kindly request that they submit and validate their methods in the Future Test Phase.
[2023.08.16] In light of the ten-day extension of the test phase deadline, we have considered the situation of teams that have already used up their allotted submissions. Participants who have previously submitted during this phase may contact us via our official email address to request the removal of their prior submissions, enabling them to make new submissions within the test phase.
[2023.08.14] In response to valuable feedback from participants, and with the aim of allowing ample time for challenge submissions, we have decided to extend the submission deadlines. The new deadline for test set submissions is August 25, 2023. The new deadline for paper submissions is September 15, 2023.
[2023.08.14] In relation to Tasks 2 and 3, we have noted that several teams successfully submitted their results during the validation phase but have yet to submit during the subsequent test phase. We strongly encourage these teams, as well as all other participating teams, to make a submission in the test phase. Your active participation not only enhances your chances of winning but also contributes to the overall success of the competition.
[2023.08.14] We have thoroughly refined the instructions for paper submissions. For detailed guidelines, please refer to the updated "Paper submission" section in the "Terms and Conditions" tab.
[2023.07.25] Based on the entries and feedback from the validation phase, the code execution time limit for the test phase is 14,400 seconds (4 hours).
[2023.07.21] The Python packages have been updated and will remain unchanged. New Python package installation requests will no longer be accepted.
[2023.07.18] The limit on the number of submissions in the validation phase has been increased to 400.
[2023.07.13] The deadline for validation phase submissions has been extended to July 25th.
[2023.07.13] The Python packages have been updated.
[2023.07.13] To enable participants to conduct code correctness checks in the validation phase using the latest Python environment, and to ensure consistency in Python package usage between the validation phase and test phase, the deadline for Python package installation requests is July 20th.
[2023.06.25] The Python packages have been updated.
[2023.06.18] The training set for task 2 has been released. The download link can be found in the "Get Data" section of the "participate" tab.
[2023.05.31] More images have been added to the training set for task 1.
[2023.05.25] The training sets for task 1 and task 3 have been released. The download link can be found in the "Get Data" section of the "participate" tab.
[2023.04.23] The dataset has not been released yet; once released, the download link will be sent to approved participants via email.
Q: Can I use external data?
A: The details regarding the use of external data can be found on the "Terms and Conditions" tab. Please note that the UK Biobank dataset is not permitted, as it is a fee-based dataset that is not accessible to the majority of participants. You are only allowed to use datasets that are freely available to the public.
Q: Why doesn't my team name appear on the leaderboard?
A: Each participant must first complete the team registration form. Then, to ensure that your team name appears on the challenge leaderboard, follow these steps to set up your team name in your CodaLab account:
1. Log in to your CodaLab account.
2. Click on your account name located in the top right-hand corner of the page.
3. Enter your team name in the designated "Team name" field.
Q: Why do my submissions always fail?
A: For detailed error information, you can review the ingestion error log file.
Q: Why is my request to join still pending?
A: To ensure your request to join is processed promptly, please make sure that you have completed all the steps listed on the "Join the challenge" page and filled in the required information accurately. Note that some participants fill out the challenge registration form and select three tasks but fail to submit a task request on CodaLab; in such cases, we will only approve the tasks for which a registration request has been made on CodaLab. If you still wish to participate in a task, kindly fill out the challenge registration form again.
Q: Can I request to add Python packages to the challenge submission docker environment?
A: Participants can request the installation of additional Python packages by completing the Python package request form. However, please note that not all requested packages can be added, as some may conflict with the existing environment. In such cases, we recommend including the required Python packages or the corresponding code in your submission. Please be aware that Python package addition requests will not be considered after the validation phase. The deadline for Python package installation requests is July 20, 2023.
* Note: If you plan to participate in multiple tasks, you will only need to fill out the challenge registration form once.
Participants are encouraged to form teams, and there are no limitations on the number of members in each team. To join or create a team, each member must complete the Team Registration form, which includes providing their CodaLab username, email address, team name, and team leader. However, please note the following rules:
The MMAC challenge is held on the CodaLab platform in a code submission format. The sample code submission can be downloaded from the "Files" section of the "participate" tab. The submission includes the following files in a zip archive:
MMAC dataset download. The MMAC dataset has been deposited in the Zenodo data repository (DOI 10.5281/zenodo.11025749) and can be downloaded from https://zenodo.org/records/11025749 (a small sketch for listing the record's files programmatically is given after this list).
Algorithm papers. The 15 submitted algorithms - 7 for Task 1, 4 for Task 2, and 4 for Task 3 - have been detailed in 11 papers published in the Challenge Proceedings, which can be downloaded from https://link.springer.com/book/10.1007/978-3-031-54857-4.
Evaluation codes & model weights. The evaluation codes and model weights submitted in the test phase for the 15 algorithms can be downloaded from this link. Researchers can use these codes and weights to test the performance of these algorithms on their own datasets.
Training code. The training code for each team is available at the following GitHub link. Researchers can further develop their own algorithms based on these existing models or train the model using their own datasets.
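For convenience, here is a minimal sketch for listing the files of the Zenodo record from the challenge release above. The JSON layout of the Zenodo REST API response (the "files", "key" and "links"/"self" fields) is an assumption and may change, so treat this as a starting point rather than an official download script.

# List the files of the MMAC Zenodo record; the response schema is assumed.
import requests

RECORD_ID = "11025749"
resp = requests.get(f"https://zenodo.org/api/records/{RECORD_ID}", timeout=30)
resp.raise_for_status()
record = resp.json()

# Each file entry is expected to carry a name and a direct download link.
for entry in record.get("files", []):
    name = entry.get("key") or entry.get("filename")
    link = (entry.get("links") or {}).get("self")
    print(name, link)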
If you use this dataset for your research, please cite our paper:
@article{qian2024competition,
title={A competition for the diagnosis of myopic maculopathy by artificial intelligence algorithms},
author={Qian, Bo and Sheng, Bin and Chen, Hao and Wang, Xiangning and Li, Tingyao and Jin, Yixiao and Guan, Zhouyu and Jiang, Zehua and Wu, Yilan and Wang, Jinyuan and others},
journal={JAMA ophthalmology},
volume={142},
number={11},
pages={1006--1015},
year={2024},
publisher={American Medical Association}
}
Challenge chairs
Challenge committee chairs
Challenge committee members
Technical support
Start: June 1, 2023, 8 a.m.
Description: Evaluation on validation set. This phase does not count towards final task rankings.
Start: July 25, 2023, 8 a.m.
Description: Evaluation on test set.
Start: Aug. 25, 2023, 11:59 p.m.
Description: Subsequent participants can submit results to evaluate their algorithms, but will not be awarded prizes or certificates.
Competition ends: Aug. 25, 2023, 11:59 p.m.