Low-Light sRGB Image Enhancement

Organized by BIT-CV

First phase (validation): starts Feb. 1, 2024, midnight UTC

Competition ends: April 30, 2024, 11:59 p.m. UTC

PBDL2024: Low-Light sRGB Image Enhancement Challenge

News!

  • The low-light sRGB image enhancement track starts now! We have released the training and validation data. Check out the "Get data" page and prepare your submission!

Important dates

  • 2024.02.20 Challenge site online

  • 2024.02.21 Release of train data (paired images) and validation data (inputs only)

  • 2024.03.01 Validation server online

  • 2024.04.23 Final test data release (inputs only)

  • 2024.04.30 Test submission deadline

  • 2024.05.05 Fact sheets and code/executable submission deadline

  • 2024.05.10 Preliminary test and rating results release to participants

Overview

Compared with normal-light images, images captured under poor lighting conditions suffer severe quality degradation due to unavoidable environmental or technical constraints, resulting in unpleasant visual perception: loss of detail, color distortion, and heavy noise. These degradations significantly impact the performance of downstream vision tasks such as image classification, object detection, and semantic segmentation [1–4]. To mitigate them, low-light image enhancement has become an important topic in the low-level image processing community, aiming to effectively improve visual quality and restore image details.

We will use the low-light image enhancement dataset proposed by Prof. Fu's team in [a]. They captured a dataset of paired normal/low-light images using a Canon EOS 5D Mark IV camera. The images cover a variety of scenes, e.g., museums, parks, streets, landscapes, vehicles, plants, buildings, symbols, and furniture; the number of outdoor images is nearly three times that of indoor images. Notably, all scenes in the dataset are static, ensuring that the content of each low-light image and its ground truth are identical. We will host the competition on an open-source online platform (CodaLab). All submissions are evaluated by our script running on the server, and we will manually double-check the results of top-ranked methods before releasing the final test-set ranking.

Submission

The training data has already been made available to registered participants.

General Rules

Please check the terms and conditions for further rules and details.

Reference

[a] Ying Fu, Yang Hong, Linwei Chen, Shaodi You. LE-GAN: Unsupervised low-light image enhancement network using attention module and identity invariant loss. Knowledge-Based Systems, 2022, 240: 108010.

Contact Us

Please test the submission process during the validation (val) phase to ensure smooth submission. If you encounter any bugs, please contact us promptly during the validation phase; once the testing phase begins, to ensure fairness, we will no longer address issues related to result submissions. You can reach the organizers by email at pbdl.ws@gmail.com with the subject line 'Low Light Image Enhancement Inquiry'.

Q&A

To mitigate the impact of the offset issue in this dataset, we have selected several image pairs that do not appear in the training set. These images have the same format, quantity, source, and collection method as the test set used in the final testing phase. We will also release our scoring program, so participants can use the scores obtained on these images as a reference before the final testing phase.

[Baidu Netdisk] [OneDrive]

 

Evaluation Criteria

Our evaluation metrics are calculated within the sRGB color space. We assess the performance by measuring the discrepancy between the results and the ground truth images.

We employ the standard Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) computed in grayscale, as is common in the literature. The final evaluation metric is calculated using the following formula:

 

$$Score=\log_k(SSIM \cdot k^{PSNR})=PSNR+\log_k(SSIM)$$

In our implementation, $k=1.2$.

For the final ranking, we will use the average Score as the primary measure. Algorithm complexity serves only as a reference and is not included in the final metric. Please refer to the evaluation function in 'evaluate.py' in the scoring program.
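As a sanity check, the combined score can be reproduced from a result's PSNR and SSIM values alone. The sketch below is an illustrative implementation of the formula with $k=1.2$, not the official 'evaluate.py'; the function name `challenge_score` is our own:

```python
import math

def challenge_score(psnr: float, ssim: float, k: float = 1.2) -> float:
    """Combine PSNR (in dB) and grayscale SSIM into a single ranking score.

    Score = log_k(SSIM * k**PSNR) = PSNR + log_k(SSIM).
    Since SSIM <= 1, log_k(SSIM) <= 0, so the score never exceeds the PSNR.
    """
    if not 0.0 < ssim <= 1.0:
        raise ValueError("SSIM must lie in (0, 1]")
    return psnr + math.log(ssim, k)

# A perfect SSIM leaves the score equal to the PSNR;
# lower SSIM values subtract a penalty.
print(challenge_score(30.0, 1.0))  # 30.0
print(challenge_score(30.0, 0.9))  # slightly below 30
```

Because SSIM enters only through $\log_k(SSIM)$, the score is dominated by PSNR and the SSIM term acts as a small structural-quality penalty.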

Submission

During the development phase, participants can submit their results on the validation set to the CodaLab server. The validation set should be used only for evaluation and analysis, NOT for training. In the testing phase, participants will submit restoration results for the entire test set; these should match their last submission to CodaLab.

Terms and Conditions

General Rules

The Low Light Image Enhancement Challenge is one track of the PBDL Challenge, part of the Physics Based Vision meets Deep Learning Workshop 2024, held in conjunction with CVPR 2024. Participants are not restricted to training their algorithms only on the provided dataset; other PUBLIC datasets may be used as well. Participants are expected to develop more robust and generalized methods for low-light image enhancement in real-world scenarios.

When participating in the competition, please be reminded that:

  • Results in the correct format must be uploaded to the evaluation server. The evaluation page lists detailed information regarding how results will be evaluated.
  • Each entry must be associated with a team and include its affiliation.
  • Using multiple accounts to increase the number of submissions, as well as private sharing outside teams, is strictly prohibited.
  • The organizer reserves the absolute right to disqualify entries that are incomplete or illegible, late entries, or entries that violate the rules.
  • The organizer reserves the right to adjust the competition schedule and rules as circumstances require.
  • The best entry of each team will be publicly visible on the leaderboard at all times.
  • To compete for awards, participants must fill out a fact sheet briefly describing their method. There is no other publication requirement.

Terms of Use: Dataset

Before downloading and using the dataset, please agree to the following terms of use. You, your employer, and your affiliations are referred to as the "User". The organizers and their affiliations are referred to as the "Producer".

  • All the data is used for non-commercial/non-profit research purposes only.
  • All the images in the dataset can be used for academic purposes.
  • The User takes full responsibility for any consequence caused by his/her use of the dataset in any form and shall defend and indemnify the Producer against all claims arising from such uses.
  • The User should NOT distribute, copy, reproduce, disclose, assign, sublicense, embed, host, transfer, sell, trade, or resell any portion of the dataset to any third party for any purpose.
  • The User can provide his/her research associates and colleagues with access to dataset (the download link or the dataset itself) provided that he/she agrees to be bound by these terms of use and guarantees that his/her research associates and colleagues agree to be bound by these terms of use.
  • The User should NOT remove or alter any copyright, trademark, or other proprietary notices appearing on or in copies of the dataset.
  • This agreement is effective for any potential User of the dataset upon the date that the User first accesses the dataset in any form.
  • The Producer reserves the right to terminate the User's access to the dataset at any time.
  • For use of the dataset, please cite the paper:
@article{fu2022gan,
  title={LE-GAN: Unsupervised low-light image enhancement network using attention module and identity invariant loss},
  author={Fu, Ying and Hong, Yang and Chen, Linwei and You, Shaodi},
  journal={Knowledge-Based Systems},
  volume={240},
  pages={108010--108020},
  year={2022},
  publisher={Elsevier}
}

Reproducibility

Industry and research labs are allowed to submit entries and to compete in both the validation phase and the final test phase. However, to be officially ranked on the final test leaderboard and to be eligible for awards, reproducibility of the results is a must; participants must therefore make their code or executables available and submit them. All top entries will be checked for reproducibility and marked accordingly.

 

