Few-shot RAW Image Denoising @ MIPI-challenge

Organized by Srameo


News!

  • The few-shot RAW image denoising track starts now! We have released the training and validation data. Check out the "Get data" page and prepare your submission!

Important dates

  • 2024.01.10 Challenge site online

  • 2024.01.15 Release of train data (paired images) and validation data (inputs only)

  • 2024.01.19 Validation server online

  • 2024.03.01 Release of test data (inputs only), test server online

  • 2024.03.06 Test results submission deadline, test server closed

  • 2024.03.06 Fact sheets and code/executable submission deadline

Overview

Few-shot RAW image denoising targets training neural networks for RAW image denoising in scenarios where paired data is limited. The prevailing practice has been to train denoising networks on large amounts of paired data. However, as the difficulty of acquiring extensive paired data became apparent and noise modeling grounded in physical principles matured, an increasing number of methods have adopted synthetic-noise strategies. These approaches nonetheless require a multi-step process: 1) gathering calibration data, 2) computing noise model parameters, and 3) training networks from scratch. This process can be quite cumbersome, particularly when dealing with various sensor types, each requiring its own specialized denoising network. Leveraging few-shot paired data, which allows bypassing the calibration data collection and noise parameter estimation stages, can therefore effectively alleviate the limitations of calibration-based algorithms.
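
To make the synthetic-noise route above concrete, the following is a minimal sketch of physics-based noise synthesis under a simple Poisson-Gaussian model, in the spirit of Wei et al. (see Reference). The system gain K and read-noise level sigma_read are placeholder values; a real calibration pipeline would fit them per camera and per ISO from calibration frames.

    import numpy as np

    def add_synthetic_noise(clean_raw, K=0.5, sigma_read=2.0):
        # clean_raw: black-level-subtracted, non-negative RAW image (float array).
        # Shot noise: Poisson in the photon domain, scaled back by the system gain K.
        photons = np.random.poisson(clean_raw / K)
        # Read noise: approximated as zero-mean Gaussian in the digital domain.
        read = np.random.normal(0.0, sigma_read, size=clean_raw.shape)
        return photons * K + read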

In this competition, we provide a dataset captured by two distinct cameras in diverse scenes, with varying levels of additional digital gain. The dataset will be used for both few-shot training and testing. The cameras come from well-known brands and cover both APS-C and full-frame sensors. In addition, results will be evaluated with both SSIM and PSNR in the sRGB domain.

More details can be found in the data section of the competition.

Get Starting Kit

A starting kit is available on GitHub.

Submission

The training data has already been made available to registered participants.

General Rules

Please check the terms and conditions for further rules and details.

Reference

Jin, Xin, et al. "Lighting every darkness in two pairs: A calibration-free pipeline for raw denoising." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.

Wei, Kaixuan, et al. "Physics-based noise modeling for extreme low-light photography." IEEE Transactions on Pattern Analysis and Machine Intelligence 44.11 (2021): 8520-8537.

Contact Us

If you have any questions, please feel free to post threads on the 'Forum' tab and discuss related topics. You can also contact the organizers by email at mipi.challenge@gmail.com with the subject 'Few-shot RAW Image Denoising Inquiry'.


Evaluation Criteria

Because RAW data is too large for direct upload, our evaluation metrics are calculated in the sRGB color space. We assess performance by measuring the discrepancy between the results and the ground-truth images.

We employ the standard Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) computed on grayscale images, as is common in the literature. The final evaluation metric is calculated using the following formula:

$$Score=\log_k(SSIM \cdot k^{PSNR})=PSNR+\log_k(SSIM)$$

In our implementation, $k=1.2$. Since $SSIM \le 1$, the $\log_k(SSIM)$ term is non-positive, so lower structural similarity reduces the Score.

For the final ranking, we will use the average Score as the primary measure. The complexity of the algorithm will serve only as a reference and will not be included in the final metric. Please refer to the evaluation function in 'evaluate.py' of the scoring program.
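
For concreteness, a minimal sketch of how this Score can be computed with scikit-image is given below. This is not the official 'evaluate.py'; it assumes the result and ground truth are 8-bit sRGB images of identical shape.

    import numpy as np
    from skimage.color import rgb2gray
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    K = 1.2  # base of the logarithm in the Score formula

    def score(result_srgb, gt_srgb):
        # PSNR on the full-color sRGB images (uint8, so data_range=255).
        psnr = peak_signal_noise_ratio(gt_srgb, result_srgb, data_range=255)
        # SSIM in grayscale, as stated above; rgb2gray returns floats in [0, 1].
        ssim = structural_similarity(rgb2gray(gt_srgb), rgb2gray(result_srgb),
                                     data_range=1.0)
        # Score = log_K(SSIM * K**PSNR) = PSNR + log_K(SSIM).
        return psnr + np.log(ssim) / np.log(K)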

Submission

During the development phase, participants can submit their results on the validation set to the CodaLab server. The validation set should be used only for evaluation and analysis, NOT for training. In the testing phase, participants will submit restoration results for the whole test set; these should match their last submission to CodaLab.
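
As an illustration only, a submission archive can be assembled with a few lines of Python. The flat-zip layout and the '.png' extension here are assumptions; the required file naming and format are specified on the challenge's submission page and take precedence.

    import zipfile
    from pathlib import Path

    def pack_results(result_dir, out_zip="submission.zip"):
        # Zip all PNG results at the archive root (no nested folders);
        # verify the expected layout against the submission instructions.
        with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
            for png in sorted(Path(result_dir).glob("*.png")):
                zf.write(png, arcname=png.name)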

Terms and Conditions

General Rules

The Few-shot RAW Image Denoising Challenge is one track of MIPI-Challenge, Mobile Intelligent Photography & Imaging Workshop 2024, in conjunction with CVPR 2024. Participants are not restricted to training their algorithms only on the provided dataset; other PUBLIC datasets can be used as well. Participants are expected to develop more robust and generalized methods for few-shot RAW image denoising in real-world scenarios.

When participating in the competition, please be reminded that:

  • Results in the correct format must be uploaded to the evaluation server. The evaluation page lists detailed information regarding how results will be evaluated.
  • Each entry must be associated with a team and provide its affiliation.
  • Using multiple accounts to increase the number of submissions and private sharing outside teams are strictly prohibited.
  • The organizer reserves the absolute right to disqualify entries that are incomplete or illegible, late entries, or entries that violate the rules.
  • The organizer reserves the right to adjust the competition schedule and rules as circumstances require.
  • The best entry of each team will be public on the leaderboard at all times.
  • To compete for awards, the participants must fill out a factsheet briefly describing their methods. There is no other publication requirement.

Terms of Use: Dataset

Before downloading and using the dataset, please agree to the following terms of use. You, your employer, and your affiliations are referred to as the "User". The organizers and their affiliations are referred to as the "Producer".

  • All the data is used for non-commercial/non-profit research purposes only.
  • All the images in the dataset can be used for academic purposes.
  • The User takes full responsibility for any consequence caused by his/her use of the dataset in any form and shall defend and indemnify the Producer against all claims arising from such uses.
  • The User should NOT distribute, copy, reproduce, disclose, assign, sublicense, embed, host, transfer, sell, trade, or resell any portion of the dataset to any third party for any purpose.
  • The User can provide his/her research associates and colleagues with access to dataset (the download link or the dataset itself) provided that he/she agrees to be bound by these terms of use and guarantees that his/her research associates and colleagues agree to be bound by these terms of use.
  • The User should NOT remove or alter any copyright, trademark, or other proprietary notices appearing on or in copies of the dataset.
  • This agreement is effective for any potential User of the dataset upon the date that the User first accesses the dataset in any form.
  • The Producer reserves the right to terminate the User's access to the dataset at any time.
  • For using the dataset, please consider citing the paper (if any):
    @inproceedings{jiniccv23led,
        title={Lighting Every Darkness in Two Pairs: A Calibration-Free Pipeline for RAW Denoising},
        author={Jin, Xin and Xiao, Jia-Wen and Han, Ling-Hao and Guo, Chunle and Zhang, Ruixun and Liu, Xialei and Li, Chongyi},
        booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
        year={2023}
    }

    @article{jin2023make,
        title={Make Explicit Calibration Implicit: "Calibrate" Denoiser Instead of the Noise Model},
        author={Jin, Xin and Xiao, Jia-Wen and Han, Ling-Hao and Guo, Chunle and Liu, Xialei and Li, Chongyi and Cheng, Ming-Ming},
        journal={arXiv preprint},
        year={2023}
    }

Reproducibility

Industry and research labs are allowed to submit entries and to compete in both the validation phase and the final test phase. However, in order to be officially ranked on the final test leaderboard and to be eligible for awards, the reproducibility of the results is a must; therefore, participants need to make available and submit their code or executables. All the top entries will be checked for reproducibility and marked accordingly.

Validation

Start: Jan. 19, 2024, 7:59 a.m.

Description: Results must be submitted for online evaluation through this CodaLab competition site.

Final test

Start: March 1, 2024, 7:59 a.m.

Description: Results must be submitted for online evaluation through this CodaLab competition site.

Competition Ends

March 6, 2024, 7:59 a.m.
