RGBW Joint Remosaic and Denoise @MIPI-challenge

Organized by jnjaby


News!

  • The test phase has started! We have released the input data for testing. Check out the "Get data" page and prepare your final submission!
  • We provide an alternative external link for accessing the data in case you get no response from the "Participate -> Get data" tab. Note that you will still need to register for official participation.

Important dates

  • 2022.04.08 Challenge site online
  • 2022.05.15 Release of train data (paired images) and validation data (inputs only)
  • 2022.05.15 Validation server online
  • 2022.07.13 Final test data release (inputs only) (updated from 2022.06.25)
  • 2022.07.20 Test output results submission deadline (updated from 2022.07.02)
  • 2022.07.20 Fact sheets and code/executable submission deadline (updated from 2022.07.02)
  • 2022.07.30 Preliminary test and rating results release to participants (updated from 2022.07.12)

Overview

RGBW is a new type of CFA pattern designed to enhance image quality under low-light conditions. Thanks to the higher optical transmittance of white pixels compared with conventional red, green, and blue pixels, the signal-to-noise ratio (SNR) of the sensor output is significantly improved, which in turn boosts image quality, especially in low light.

On the other hand, conventional camera ISPs can only work with Bayer patterns, so an interpolation procedure is required to convert RGBW to a Bayer pattern. This interpolation process is usually referred to as remosaic, and a good remosaic algorithm should (1) produce a Bayer output from RGBW with as few artifacts as possible, and (2) fully exploit the SNR and resolution benefits of the white pixels.
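
To make the remosaic operation concrete, below is a minimal, layout-agnostic baseline sketch: each of the R, G, and B planes is interpolated from its sparse sample positions in the RGBW mosaic (via normalized convolution) and then sampled at the target Bayer positions. The mask dictionaries, function names, and kernel size are illustrative assumptions rather than part of the challenge code, and a competitive solution would also exploit the white pixels and perform denoising.

```python
import numpy as np
from scipy.ndimage import convolve

def interpolate_sparse(raw, mask, kernel_size=5):
    """Fill a full-resolution plane from sparse samples via normalized convolution."""
    kernel = np.ones((kernel_size, kernel_size), dtype=np.float64)
    num = convolve(raw * mask, kernel, mode="mirror")
    den = convolve(mask, kernel, mode="mirror")
    return num / np.maximum(den, 1e-8)

def naive_remosaic(rgbw_raw, rgbw_masks, bayer_masks):
    """Baseline remosaic: interpolate the R/G/B planes from their RGBW sample
    locations, then sample each plane at the target Bayer positions.

    rgbw_masks and bayer_masks are dicts of boolean arrays (same shape as the
    raw image) keyed by 'R', 'G', 'B'; their layout depends on the sensor.
    """
    raw = rgbw_raw.astype(np.float64)
    bayer = np.zeros_like(raw)
    for c in ("R", "G", "B"):
        plane = interpolate_sparse(raw, rgbw_masks[c].astype(np.float64))
        bayer[bayer_masks[c]] = plane[bayer_masks[c]]
    return bayer
```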

The remosaic problem becomes even more challenging when the input RGBW data is noisy, especially under low-light conditions. A joint remosaic and denoise task is thus in demand for real-world applications.

To visualize the Bayer output, we provide a simple ISP that includes white balance correction, demosaicing, color correction, gamma correction, and so on. To evaluate the image quality (IQ) of the output Bayer, we employ several publicly available IQ metrics, including PSNR, SSIM, KL divergence (KLD), and LPIPS [1]. PSNR, SSIM, and LPIPS are calculated on the RGB image rendered from the Bayer output by the provided simple ISP, while KLD is estimated on the Bayer data directly.
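
A minimal sketch of such a pipeline is given below for orientation only; the white-balance gains, colour-correction matrix, bit depth, and the RGGB layout are placeholder assumptions, and the official evaluation uses the ISP code released by the organizers.

```python
import cv2
import numpy as np

def simple_isp(bayer, wb_gains=(1.9, 1.0, 1.6), ccm=np.eye(3, dtype=np.float32),
               gamma=2.2, white_level=1023):
    """Illustrative ISP: white balance -> demosaic -> colour correction -> gamma."""
    x = bayer.astype(np.float32) / white_level
    # White balance applied on the raw mosaic, assuming an RGGB layout.
    r_gain, g_gain, b_gain = wb_gains
    x[0::2, 0::2] *= r_gain   # R sites
    x[0::2, 1::2] *= g_gain   # G sites (first row of each 2x2 block)
    x[1::2, 0::2] *= g_gain   # G sites (second row of each 2x2 block)
    x[1::2, 1::2] *= b_gain   # B sites
    x = np.clip(x, 0.0, 1.0)
    # Demosaic with OpenCV; COLOR_BayerBG2RGB corresponds to an RGGB layout
    # under OpenCV's naming convention and must match the real data layout.
    raw16 = (x * 65535.0).astype(np.uint16)
    rgb = cv2.cvtColor(raw16, cv2.COLOR_BayerBG2RGB).astype(np.float32) / 65535.0
    # Per-pixel colour correction, then gamma encoding.
    rgb = np.clip(rgb @ ccm.T, 0.0, 1.0)
    return np.power(rgb, 1.0 / gamma)
```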

We hold this RGBW joint remosaic and denoise challenge in conjunction with MIPI-Challenge, which will be held at ECCV 2022. We are seeking efficient, high-performance remosaic algorithms that produce a Bayer output from RGBW input.

More details can be found in the data section of the competition.

Submission

The training data has already been made available to the registered participants.

General Rules

Please check the terms and conditions for further rules and details.

Reference

[1] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, "The Unreasonable Effectiveness of Deep Features as a Perceptual Metric," CVPR 2018.

Contact Us

If you have any questions, please feel free to post threads on the 'Forum' tab to discuss related topics. You can also contact us by emailing the organizers at mipi.challenge@gmail.com with the subject 'RGBW Joint Remosaic and Denoise Challenge Inquiry'.

Evaluation Criteria

The evaluation consists of (1) the comparison of the remosaic output (Bayer) with the reference ground-truth Bayer, and (2) the comparison of the RGB images rendered from the predicted and ground-truth Bayer outputs using a simple ISP (the code of the simple ISP is provided).

We use

  • Peak Signal To Noise Ratio (PSNR)
  • Structural Similarity Index Measure (SSIM)
  • KL divergence (KLD)
  • Learned Perceptual Image Patch Similarity (LPIPS)

to evaluate the remosaic performance. PSNR, SSIM, and LPIPS are applied to the RGB image rendered from the Bayer output using the provided simple ISP code, while the KL divergence is evaluated on the predicted Bayer output directly.

A metric weighting PSNR, SSIM, KL divergence, and LPIPS is used to produce the final ranking of each method, and we will also report each metric separately. The code to calculate the metrics, including the weighted metric (M4), is provided. The M4 score ranges from 0 to 100, and a higher score indicates better overall image quality.
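
The sketch below illustrates how the four measures could be computed; the function names, the histogram-based KLD, and the metric settings are our own assumptions, and the official scores, including the exact M4 weighting, come from the provided metric code.

```python
import numpy as np
import torch
import lpips  # LPIPS of Zhang et al. [1]
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_net = lpips.LPIPS(net="alex")

def kl_divergence(pred_bayer, gt_bayer, bins=256):
    """KLD between intensity histograms of predicted and ground-truth Bayer data in [0, 1]."""
    p, _ = np.histogram(pred_bayer, bins=bins, range=(0.0, 1.0))
    q, _ = np.histogram(gt_bayer, bins=bins, range=(0.0, 1.0))
    p = p.astype(np.float64) / max(p.sum(), 1)
    q = q.astype(np.float64) / max(q.sum(), 1)
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))

def to_lpips_tensor(rgb):
    """HxWx3 float image in [0, 1] -> 1x3xHxW tensor in [-1, 1]."""
    return torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0).float() * 2.0 - 1.0

def evaluate(pred_bayer, gt_bayer, pred_rgb, gt_rgb):
    """PSNR/SSIM/LPIPS on the ISP-rendered RGB images, KLD on the Bayer data."""
    psnr = peak_signal_noise_ratio(gt_rgb, pred_rgb, data_range=1.0)
    ssim = structural_similarity(gt_rgb, pred_rgb, channel_axis=-1, data_range=1.0)
    with torch.no_grad():
        lp = float(lpips_net(to_lpips_tensor(pred_rgb), to_lpips_tensor(gt_rgb)).item())
    kld = kl_divergence(pred_bayer, gt_bayer)
    return {"PSNR": psnr, "SSIM": ssim, "LPIPS": lp, "KLD": kld}
```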

In the final test phase, we will run your algorithm in a Docker container on our GPU server. While the running time of your algorithm is not used for ranking, the running time for each image must be within 5 minutes. To ensure your algorithm runs smoothly in our Docker environment, we will release the Docker image shortly for development purposes. The Docker image can be found at the linked site.

Submission

During the development phase, participants can submit their results on the validation set to the CodaLab server. The validation set should only be used for evaluation and analysis purposes, NOT for training. In the testing phase, participants will submit the restoration results for the whole test set. These should match their latest submission to CodaLab.

Terms and Conditions

General Rules

The RGBW Joint Remosaic and Denoise Challenge is one track of MIPI-Challenge, the Mobile Intelligent Photography & Imaging Workshop 2022, in conjunction with ECCV 2022. Participants may only train their algorithms on the provided dataset and are expected to develop robust and generalizable methods for the RGBW remosaic task in real-world scenarios.

When participating in the competition, please be reminded that:

  • Results in the correct format must be uploaded to the evaluation server. The evaluation page lists detailed information regarding how results will be evaluated.
  • Each entry must be associated with a team and provide its affiliation.
  • Using multiple accounts to increase the number of submissions and private sharing outside teams are strictly prohibited.
  • The organizer reserves the absolute right to disqualify entries that are incomplete or illegible, late entries, or entries that violate the rules.
  • The organizer reserves the right to adjust the competition schedule and rules if circumstances require.
  • The best entry of each team will be displayed publicly on the leaderboard at all times.
  • To compete for awards, the participants must fill out a factsheet briefly describing their methods. There is no other publication requirement.

Terms of Use: Dataset

Before downloading and using the dataset, please agree to the following terms of use. You, your employer, and your affiliations are referred to as the "User". The organizers and their affiliations are referred to as the "Producer".

  • All the data is used for non-commercial/non-profit research purposes only.
  • All the images in the dataset can be used for academic purposes.
  • The User takes full responsibility for any consequence caused by his/her use of the dataset in any form and shall defend and indemnify the Producer against all claims arising from such uses.
  • The User should NOT distribute, copy, reproduce, disclose, assign, sublicense, embed, host, transfer, sell, trade, or resell any portion of the dataset to any third party for any purpose.
  • The User may provide his/her research associates and colleagues with access to the dataset (the download link or the dataset itself), provided that he/she agrees to be bound by these terms of use and guarantees that his/her research associates and colleagues also agree to be bound by them.
  • The User should NOT remove or alter any copyright, trademark, or other proprietary notices appearing on or in copies of the dataset.
  • This agreement is effective for any potential User of the dataset upon the date that the User first accesses the dataset in any form.
  • The Producer reserves the right to terminate the User's access to the dataset at any time.
  • To use or cite the dataset, please contact yangqingyu@sensebrain.site

Reproducibility

Industry and research labs are allowed to submit entries and to compete in both the validation phase and the final test phase. However, in order to be officially ranked on the final test leaderboard and to be eligible for awards, the results must be reproducible; participants therefore need to make their code or executables available and submit them. All the top entries will be checked for reproducibility and marked accordingly.

Validation

Start: May 15, 2022, 7 a.m. UTC

Description: The online evaluation results must be submitted through this CodaLab competition site of the Challenge.

Final test

Start: July 14, 2022, 7 a.m. UTC

Description: The online evaluation results must be submitted through this CodaLab competition site of the Challenge.

Competition Ends

July 21, 2022, 6:59 a.m. UTC
