MICCAI MMAC 2023 - Myopic Maculopathy Analysis Challenge - Task 1

Organized by mmac

Myopic Maculopathy Analysis Challenge 2023

 

The challenge has ended. The final ranking can be found at this link.

The MMAC dataset, algorithms, and code can be downloaded under the Dataset and Code Download tab.

 

This is Task 1 (Classification of Myopic Maculopathy) of the challenge; see Task 2 and Task 3 for the other two tasks.

Introduction

Myopia is a common oculopathy affecting large populations worldwide [1]. More seriously, myopia may progress to high myopia, in which visual impairment results mainly from the development of different types of myopic maculopathy [2, 3]. In many countries, such as Japan, China, Denmark and the United States, myopic maculopathy is one of the leading causes of visual impairment and legal blindness [4, 5]. According to severity, myopic maculopathy can be classified into five categories: no macular lesions, tessellated fundus, diffuse chorioretinal atrophy, patchy chorioretinal atrophy and macular atrophy [4]. In addition, three "Plus" lesions are defined and added to these categories: lacquer cracks (Lc), choroidal neovascularization (CNV), and Fuchs spot (Fs). Myopic maculopathy is likely to progress more quickly after the stage of tessellated fundus [3], and it is estimated that about 90% of eyes with CNV show progression of myopic maculopathy [3]. At present, fundus photography is a commonly used imaging modality for diagnosing myopic maculopathy: it allows doctors to diagnose the disease accurately and quickly, and it also supports spherical equivalent prediction without mydriasis. Prompt screening and intervention are necessary to prevent further progression of myopic maculopathy and avoid vision loss. However, diagnosis currently relies on manual, image-by-image inspection, which is time-consuming and depends heavily on the experience of ophthalmologists. Therefore, an effective computer-aided system is essential to help ophthalmologists analyze myopic maculopathy and provide accurate diagnosis and reasonable intervention for this disease.

Aiming to advance the state of the art in automatic myopic maculopathy analysis, we organize the Myopic Maculopathy Analysis Challenge. The challenge encourages researchers to develop algorithms for different tasks in myopic maculopathy analysis using fundus photography, including classification of myopic maculopathy, segmentation of myopic maculopathy plus lesions, and spherical equivalent prediction. On the one hand, the classification and segmentation tasks provide a basis for automated analysis of myopic maculopathy in clinical practice. On the other hand, higher degrees of myopia are associated with an increased risk of severe types of myopic maculopathy [5], so spherical equivalent prediction can help assess the risk of myopic maculopathy. With this dataset, various algorithms can test their performance and be compared fairly against one another. To the best of our knowledge, it is the first fundus-image dataset covering the classification and segmentation of myopic maculopathy as well as the prediction of spherical equivalent. We believe this challenge is an important milestone in myopic maculopathy analysis and hope that it will drive innovation in automatic medical image analysis.


Challenge description

MMAC 2023 is the first edition of the challenge and is associated with MICCAI 2023. Three tasks are proposed (participants are free to choose one or more tasks):

  • Task 1: Classification of myopic maculopathy.
  • Task 2: Segmentation of myopic maculopathy plus lesions.
  • Task 3: Prediction of spherical equivalent.

Important dates

  • Release of training data: May 25th 2023 for Task 1 and Task 3, June 18th 2023 for Task 2.
  • Open for submission on validation set: June 1st 2023 for Task 1 and Task 3, June 18th 2023 for Task 2.
  • Submission deadline on validation set: July 25th 2023 (originally July 15th).
  • Open for submission on test set: July 25th 2023 (originally July 15th).
  • Submission deadline on test set: August 25th 2023 (originally August 15th).
  • Submission deadline for method description paper: September 15th 2023 (originally September 5th).
  • Winner and invitation speakers: No later than September 30th 2023.
  • Associated workshop day: October 8th or 12th at the MICCAI 2023 workshop.

Awards

  • Certificates will be awarded to the top three teams in each task.
  • The top ranked teams will be invited to give oral presentations during the MICCAI 2023 Workshop.
  • The first and last authors of the submitted paper from the top ranked teams in each task will have the opportunity to co-author the challenge summary paper.
  • All teams will have the chance to publish their methodology papers in the MICCAI 2023 Challenge Proceedings.

References

[1] Holden B A, Fricke T R, Wilson D A, et al. Global prevalence of myopia and high myopia and temporal trends from 2000 through 2050[J]. Ophthalmology, 2016, 123(5): 1036-1042.

[2] Ikuno Y. Overview of the complications of high myopia[J]. Retina, 2017, 37(12): 2347-2351.

[3] Silva R. Myopic maculopathy: a review[J]. Ophthalmologica, 2012, 228(4): 197-213.

[4] Ohno-Matsui K, Kawasaki R, Jonas J B, et al. International photographic classification and grading system for myopic maculopathy[J]. American journal of ophthalmology, 2015, 159(5): 877-883. e7.

[5] Yokoi T, Ohno-Matsui K. Diagnosis and treatment of myopic maculopathy[J]. The Asia-Pacific Journal of Ophthalmology, 2018, 7(6): 415-421.

 

* For more questions, please email us at mmac23miccai@163.com or start a thread in the challenge forum.

Metrics

The metrics calculation code can be found here.

Task 1: Quadratic-weighted Kappa (QWK), F1 score, Specificity.

Task 2: Dice Similarity Coefficient (DSC).

Task 3: Coefficient of determination R-squared, Mean Absolute Error (MAE).

Ranking methods

For each task, the best results on the test set will be used for ranking. The ranking methods are as follows:

Task 1: First, separate metric scores are computed for quadratic-weighted kappa, F1 score and specificity on all test cases. Second, the ranking score is obtained by averaging the scores of all metrics. The achieved quadratic-weighted Kappa will be used as a tie-break.
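
For illustration, a minimal sketch of how the Task 1 ranking score could be computed with scikit-learn is given below; the macro averaging for F1 score and specificity, the five-class label set, and the function name task1_ranking_score are assumptions of this sketch, not specifications taken from the official metrics code.

import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix, f1_score

def task1_ranking_score(y_true, y_pred, n_classes=5):
    # Quadratic-weighted kappa and F1 score (macro averaging is an assumption).
    qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
    f1 = f1_score(y_true, y_pred, average="macro")

    # Macro specificity: one-vs-rest true-negative rate, averaged over classes.
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    specificities = []
    for k in range(n_classes):
        tp = cm[k, k]
        fn = cm[k, :].sum() - tp
        fp = cm[:, k].sum() - tp
        tn = cm.sum() - tp - fn - fp
        specificities.append(tn / (tn + fp) if (tn + fp) > 0 else 0.0)
    specificity = float(np.mean(specificities))

    # Ranking score: average of the three metric scores; QWK is kept for tie-breaks.
    return (qwk + f1 + specificity) / 3.0, qwk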

Task 2: The dice similarity coefficient is calculated for each class independently and then results are averaged for the final ranking. In case of a tie, the recall and precision will be used for auxiliary ranking.
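
A small sketch of the per-class Dice computation described above is shown below, assuming binary NumPy masks per lesion class; the dictionary interface and the epsilon smoothing term are illustrative choices, not the official implementation.

import numpy as np

def dice_per_class(pred_masks, gt_masks, eps=1e-8):
    # pred_masks, gt_masks: dicts mapping lesion class -> binary 2D mask array.
    scores = {}
    for cls in gt_masks:
        p = pred_masks[cls].astype(bool)
        g = gt_masks[cls].astype(bool)
        scores[cls] = 2.0 * np.logical_and(p, g).sum() / (p.sum() + g.sum() + eps)
    # Per-class DSC and the mean over classes used for the final ranking.
    return scores, float(np.mean(list(scores.values())))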

Task 3: First, separate rankings are computed for R-squared and mean absolute error on all test cases. Second, the mean ranking of each team is computed as the average of the two rankings. The achieved R-squared will be used as a tie-break.
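
A minimal sketch of this rank-then-average aggregation is given below, assuming one (R-squared, MAE) value pair per team; the tie-break on R-squared is omitted for brevity.

def task3_mean_ranking(r2_by_team, mae_by_team):
    # r2_by_team, mae_by_team: dicts mapping team name -> metric value.
    teams = list(r2_by_team)
    # Rank 1 is best: highest R-squared, lowest MAE.
    r2_rank = {t: r for r, t in enumerate(sorted(teams, key=lambda x: -r2_by_team[x]), start=1)}
    mae_rank = {t: r for r, t in enumerate(sorted(teams, key=lambda x: mae_by_team[x]), start=1)}
    # Mean ranking of each team: average of the two per-metric rankings.
    return {t: (r2_rank[t] + mae_rank[t]) / 2.0 for t in teams}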

Validation phase

During this phase, participants will have the opportunity to evaluate their methods on a validation set and ensure that their code submission is in the correct format. Please note that this phase does not count towards final rankings. Once participants complete their submission, the scoring program will run automatically, and the evaluation results will be displayed on the leaderboard. The best results among multiple submissions will be displayed on the leaderboard.

Test phase

During this phase, the submissions from each team will be evaluated automatically on the test set. The leaderboard will be hidden from participants. Each team is allowed a maximum of four submissions, and the best result will be taken as the final challenge result. Once the challenge is over, we will publicly release the evaluation results on the test set, as well as the rankings of each team. Please note that only submissions from the TEAM LEADER are eligible for the final ranking.

Future test phase

After the competition, participants may continue to evaluate their algorithms by submitting results on the test set. Please note that the Future Test Phase is not part of MMAC, and submissions in this phase will not affect the final winners of the competition. The winners will be determined solely based on the results from the Test Phase.

Challenge rules

Participants of this challenge acknowledge that they have read and agree to the following rules:

  • Only fully automated methods will be accepted as submissions, and manual annotations and interactive methods are not permitted.
  • Each participant must create a CodaLab account to register. Only one account per user is allowed.
  • Participants are allowed to form teams with no limitations on the number of team members. However, a participant may not participate in more than one team, and a team member may not also participate as an individual in this challenge.
  • For all tasks, participants are allowed to use external data other than the challenge data to develop and test their models submitted to the challenge, provided that the external data used are publicly available and clearly stated in the submitted paper. This includes datasets and pre-trained models. If participants use their private external data, they must make it publicly available and declare it in the challenge discussion forum no later than 15 days before the challenge ends.

Paper submission

  • To qualify for the official ranking, participants must submit a corresponding paper outlining the key steps of their final method, including data preprocessing and augmentation, method description, post-processing, and results. The paper must follow the Springer Lecture Notes in Computer Science (LNCS) format, which is available on Overleaf.

  • When participating in multiple tasks, you can either submit a single paper reporting all methods and results or several papers.
  • The paper should have a minimum of 4 pages. For papers intended for publication in the MICCAI Challenge Proceedings, we prioritize those with a length of at least 6 pages.
  • Supplemental materials: For each submitted method, the participating team or individual must submit a table summarizing the proposed method, along with a figure illustrating the method pipeline or highlighting key techniques used. Additionally, a detailed paragraph describing the method pipeline should be included. It is important to note that these submissions may be used for post-challenge publicity purposes. Furthermore, for top-ranked methods, this information will be integrated into the summary paper, with the first and last authors of the submitted paper recognized as co-authors. The template and an example of supplemental materials can be accessed via this link.
  • The method description paper, supplemental materials, and License-to-Publish form can be uploaded via this Paper-Submission-Form.

MICCAI 2023 Challenge Proceedings

  • Once the challenge is complete, the organizers aim to publish a volume within the MICCAI 2023 Challenge Proceedings. As part of this process, we will coordinate the review of the submitted papers and provide constructive advice for revision. The accepted papers will be published in the MICCAI 2023 Challenge Proceedings.
  • Authors should consult Springer's Instructions for Authors of Proceedings and use either the LaTeX or the Word templates provided on the authors' page for the preparation of their papers. Springer encourages authors to include their ORCIDs in their papers.
  • The corresponding author of each paper, acting on behalf of all of the authors of that paper, must complete and sign a License-to-Publish form. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers cannot be made. Please attach the completed signed form to your email, along with the paper.
  • The participants are encouraged to release their code and add the GitHub link to their papers.

Summary paper publication

  • Upon completion of the challenge, the organizers plan to publish a challenge summary paper in a peer-reviewed journal. The top ranked methods for each task will be featured in the summary paper. Furthermore, the first and last authors of the submitted paper will be recognized as authors in the summary paper. To maintain fairness, these teams are required to submit the corresponding papers and source code.

News

[2023.08.28] Please note that only teams or individuals submitting challenge papers are eligible for the final rankings.

[2023.08.28] The validation set for each task has been released. The download link can be found in the "Get Data" section of the "participate" tab. At present, there are no plans to release the test set to the public. If participating teams are interested in evaluating their model's performance on the test set, including ablation experiments and other analyses, we kindly request that you submit and validate your methods in the Future Test Phase.

[2023.08.16] In light of a ten-day extension to the test phase deadline, we have considered the situation of certain teams depleting their allotted number of submissions. For participants who have previously submitted during this phase, we now offer the opportunity to contact us via our official email address. They can request the removal of their prior submissions, enabling them to make new submissions within the test phase.

[2023.08.14] In response to valuable feedback from participants, and with the aim of allowing ample time for challenge submissions, we have decided to extend the submission deadline. The new deadline for test set submissions is August 25, 2023. The new deadline for paper submissions is September 15, 2023.

[2023.08.14] In relation to Tasks 2 and 3, we have noted that several teams successfully submitted their results during the validation phase but have yet to submit during the subsequent test phase. We strongly recommend that these teams, as well as all other participating teams, make an active submission to win the challenge. Your active participation not only enhances your chances of winning but also contributes to the overall competition's success.

[2023.08.14] We have thoroughly refined the instructions for paper submissions. For detailed guidelines, please refer to the updated "Paper submission" section in the "Terms and Conditions" tab.

[2023.07.25] Based on the entries and feedback from the validation phase, the code execution time limit for the test phase is 14,400 seconds.

[2023.07.21] Python Packages has been updated and will remain unchanged. New Python package installation requests will no longer be accepted.

[2023.07.18] The limit on the number of submissions in the validation phase has been increased to 400.

[2023.07.13] The deadline for submissions on the validation set has been extended to July 25th.

[2023.07.13] Python Packages has been updated.

[2023.07.13] To enable participants to conduct code correctness checks in the validation phase using the latest Python environment, and to ensure consistency in Python package usage between the validation phase and test phase, the deadline for Python package installation requests is July 20th.

[2023.06.25] Python Packages has been updated.

[2023.06.18] The training set for task 2 has been released. The download link can be found in the "Get Data" section of the "participate" tab.

[2023.05.31] More images have been added to the training set for task 1.

[2023.05.25] The training sets for task 1 and task 3 have been released. The download link can be found in the "Get Data" section of the "participate" tab.

[2023.04.23] The dataset has not been released yet; once released, the download link will be sent to approved participants via email.

 

FAQ

Q: Can I use external data?

A: The details regarding the use of external data can be found on the "Terms and Conditions" tab. Please note that the UKbiobank dataset is not permitted for use as it is a fee-based dataset and not accessible to the majority of participants. You are only allowed to use datasets that are freely available to the public.

Q: Why doesn't my team name appear on the leaderboard?

A: Each participant must first complete the team registration form. Then, to ensure that your team name appears on the challenge leaderboard, follow these steps to set up your team name in your CodaLab account:

1. Log in to your CodaLab account.
2. Click on your account name located in the top right-hand corner of the page.
3. Enter your team name in the designated "Team name" field.

Q: Why do my submissions always fail?

A: For detailed error information, you can review the ingestion error log file.

Q: Why is my request to join still pending?

A: To ensure your request to join is processed promptly, please ensure that you have completed all the steps listed on the "Join the challenge" page and filled in the required information accurately. It is important to note that some participants may fill out the challenge registration form and select three tasks but fail to submit a task request on CodaLab. In such cases, we will only approve the tasks for which a registration request has been made on CodaLab. If you still need to participate in a task, kindly fill out the challenge registration form again.

Q: Can I request to add python packages in the challenge submission docker environment?

A: Participants can make requests for the installation of additional Python packages by completing the python package request form. However, please note that not all requested python packages can be added, as some may conflict with existing environments. In such cases, we recommend including the required python packages or the corresponding code in your submission. Please be aware that python package addition requests will not be considered after the validation phase. The deadline for Python package installation requests is July 20, 2023.

 

Join the challenge

  1. To get started, create a CodaLab account or log in to your existing one.
  2. Once you've logged in, navigate to the "participate" tab and register for the challenge.
  3. To complete your registration, you must fill out the Challenge Registration form. During this process, ensure that you download and complete the MMAC Challenge Rules Agreement Consent. Make sure to hand sign the document in the "signature" section and name the file "MMAC Challenge Rule Agreement_Your First Name_Your Last Name" before uploading it in PDF format.
  4. We will review your registration request within 3 working days. If you don't receive feedback within this timeframe, please reach out to us via the official email address of the challenge.

* Note: If you plan to participate in multiple tasks, you will only need to fill out the challenge registration form once.

Teams

Participants are encouraged to form teams, and there are no limitations on the number of members in each team. To join or create a team, each member must complete the Team Registration form, which includes providing their CodaLab username, email address, team name, and team leader. However, please note the following rules:

  • Before joining or creating a team, every participant is required to complete the registration process as outlined in the "join the challenge" section.
  • Each participant may only join one team, and each team member must have a separate CodaLab account.
  • During the validation phase, each team is provided with a sufficient number of submissions to assess their algorithms. As a best practice, we suggest that the team leader uploads the submission for evaluation instead of all team members.
  • During the test phase, only the team leader is allowed to submit models on behalf of the entire team.
  • All the participants in the team are required to update the "Team name" field in their CodaLab profile.
* Note: Please only use lowercase letters and numbers in your team name! No spaces or special characters.

Code submission guidance

The MMAC challenge is held on the CodaLab platform in a code submission format. The sample code submission can be downloaded in the "Files" section of the "participate" tab. The submission includes the following files in a zip archive:

  1. model.py (MUST) - contains a class named "model". The class must have implementations of the "load", "init" and "predict" functions (a minimal sketch of this layout is given after this list).
    • init - the initialization function of the model class.
    • load - a function that loads the model and model weights; the weights must be in the same folder as model.py.
    • predict - a function that receives one image at a time and returns a prediction.
    • The file may contain other functions (within the class or outside of it).
    • Imports used by the class must be compatible with the permitted Python packages.
  2. metadata (MUST)
    • Indicates that the submission is in code submission format - do not remove this file.
  3. model weights (Optional)
    • The model weights file can be in any format as long as it is compatible with the model and the permitted Python packages.
    • If the model depends on these weights, this file is mandatory.
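
A minimal sketch of the model.py layout described above is given below. It assumes PyTorch is among the permitted packages; the toy network, the weight file name "weights.pth", and the preprocessing are illustrative placeholders rather than the official sample code.

import os

import numpy as np
import torch
import torch.nn as nn

class model:
    def __init__(self):
        # init: set up the device; the network is built and restored in load().
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.net = None

    def load(self, dir_path=None):
        # load: build the network and restore weights stored next to model.py.
        if dir_path is None:
            dir_path = os.path.dirname(os.path.abspath(__file__))
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5),
        )
        # "weights.pth" is a placeholder file name for this sketch.
        state = torch.load(os.path.join(dir_path, "weights.pth"), map_location=self.device)
        self.net.load_state_dict(state)
        self.net.to(self.device).eval()

    def predict(self, image):
        # predict: receives one image (assumed H x W x 3 array) and returns a class label.
        x = torch.from_numpy(np.asarray(image, dtype=np.float32) / 255.0)
        x = x.permute(2, 0, 1).unsqueeze(0).to(self.device)
        with torch.no_grad():
            logits = self.net(x)
        return int(logits.argmax(dim=1).item())
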
Participants are required to ensure that their submitted code is compatible with the Python packages provided in the challenge submission docker environment. It is important to note that these packages are used only during the inference process, and participants are free to use different Python packages for their local training. Participants can request the installation of additional Python packages by completing the python package request form. The challenge organizer will then install the requested packages as needed and consistently update the aforementioned list of Python packages.

 

The challenge has ended. We have now made the dataset and algorithm code publicly available.

If you use this dataset for your research, please cite our paper:

@article{qian2024competition,
title={A competition for the diagnosis of myopic maculopathy by artificial intelligence algorithms},
author={Qian, Bo and Sheng, Bin and Chen, Hao and Wang, Xiangning and Li, Tingyao and Jin, Yixiao and Guan, Zhouyu and Jiang, Zehua and Wu, Yilan and Wang, Jinyuan and others},
journal={JAMA ophthalmology},
volume={142},
number={11},
pages={1006--1015},
year={2024},
publisher={American Medical Association}
}

Organizers

Challenge chairs

  • Tien Yin Wong, Tsinghua University, Beijing, China
  • Bin Sheng, Shanghai Jiao Tong University, Shanghai, China

Challenge committee chairs

  • Hao Chen, The Hong Kong University of Science and Technology, Hong Kong, China
  • Xiangning Wang, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China

Challenge committee members

  • Yih-Chung Tham, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  • Ching-Yu Cheng, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  • Marcus Ang, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  • Carol Y Cheung, The Chinese University of Hong Kong, Hong Kong, China
  • Qiang Wu, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
  • Rongping Dai, Peking Union Medical College Hospital, Beijing, China
  • Xinyuan Zhang, Beijing Tongren Hospital, Beijing, China
  • Jie Shen, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
  • Feng Lu, Huazhong University of Science and Technology, Wuhan, Hubei, China
  • Mingang Chen, Shanghai Development Center of Computer Software Technology, Shanghai, China
  • Xiaokang Yang, Shanghai Jiao Tong University, Shanghai, China
  • Yaohui Jin, Shanghai Jiao Tong University, Shanghai, China
  • Haitao Song, Shanghai Jiao Tong University, Shanghai, China
  • Yang Wen, Shenzhen University, Shenzhen, Guangdong, China
  • Yinfeng Zheng, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong, China
  • Huating Li, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
  • Pheng-Ann Heng, The Chinese University of Hong Kong, Hong Kong, China
  • Daniel Ting, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  • Gavin Siew Wei Tan, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  • Bo Qian, Shanghai Jiao Tong University, Shanghai, China
  • Tingyao Li, Shanghai Jiao Tong University, Shanghai, China
  • Gengyou Huang, Shanghai Jiao Tong University, Shanghai, China
  • Zhengrui Guo, The Hong Kong University of Science and Technology, Hong Kong, China
  • Tingli Chen, Huadong Sanatorium, Wuxi, Jiangsu, China
  • Jia Shu, Shanghai Jiao Tong University, Shanghai, China
  • Yan Zhou, Peking Union Medical College Hospital, Beijing, China

Technical support

  • Xiuyuan Chen, Shanghai Jiao Tong University, Shanghai, China
  • Shangmin Huang, Shanghai Jiao Tong University, Shanghai, China
  • Ruixue Zhang, Zaozhuang University, Zaozhuang, Shandong, China

Validation Phase - Task 1

Start: June 1, 2023, 8 a.m.

Description: Evaluation on validation set. This phase does not count towards final task rankings.

Test Phase - Task 1

Start: July 25, 2023, 8 a.m.

Description: Evaluation on test set.

Future Test Phase - Task 1

Start: Aug. 25, 2023, 11:59 p.m.

Description: Subsequent participants can submit results to evaluate their algorithms, but will not be awarded prizes or certificates.

Competition Ends

Aug. 25, 2023, 11:59 p.m.
