The low-light sRGB image enhancement track starts now! We have released the validation and training data. Check out the "Get data" page and prepare your submission!
2024.02.20 Challenge site online
2024.03.01 Validation server online
2024.04.23 Final test data release (inputs only)
2024.04.30 Test submission deadline
2024.05.05 Fact sheets and code/executable submission deadline
Compared with normal-light images, images captured under poor lighting conditions suffer severe quality degradation due to unavoidable environmental or technical constraints, resulting in unpleasant visual artifacts such as loss of detail, color distortion, and heavy noise. These degradations significantly harm the performance of downstream vision tasks such as image classification, object detection, and semantic segmentation [1–4]. To mitigate them, low-light image enhancement has become an important topic in the low-level image processing community, aiming to improve visual quality and restore image details.
We will use the low-light image enhancement dataset proposed by Prof. Fu's team in [a]. They captured a paired normal/low-light image dataset using a Canon EOS 5D Mark IV camera. The images cover a variety of scenes, e.g., museums, parks, streets, landscapes, vehicles, plants, buildings, symbols, and furniture. Among these images, outdoor images outnumber indoor images by roughly a factor of three. It is noteworthy that all scenes in the dataset are static, ensuring that the content of each low-light image and its ground truth are identical. We will host the competition on an open-source online platform, e.g., CodaLab. All submissions are evaluated by our script running on the server, and we will manually double-check the results of top-ranked methods before releasing the final test-set ranking.
The training data has already been made available to registered participants.
Please check the terms and conditions for further rules and details.
[a] Ying Fu, Yang Hong, Linwei Chen, and Shaodi You. LE-GAN: Unsupervised low-light image enhancement network using attention module and identity invariant loss. Knowledge-Based Systems, 240:108010, 2022.
Please test the submission process during the validation (val) phase to ensure smooth submission. If you encounter any bugs, please contact us promptly during the validation phase. Once the testing phase begins, to ensure fairness, we will no longer address issues related to result submission. You can contact the organizers by email at pbdl.ws@gmail.com with the title 'Low Light Image Enhancement Inquiry'.
To mitigate the impact of the offset issue in this dataset, we have selected several pairs of images that do not appear in the training set. These images have the same format, quantity, source, and collection method as the test set used in the final testing phase. Additionally, we will release our scoring program, so participants can use the scores obtained on these images as a reference before the final testing phase.
Our evaluation metrics are calculated within the sRGB color space. We assess the performance by measuring the discrepancy between the results and the ground truth images.
We employ the standard Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) computed on grayscale images, as is common in the literature. The final evaluation score is calculated using the following formula:
$$Score=\log_k\!\left(SSIM \cdot k^{PSNR}\right)=PSNR+\log_k(SSIM)$$
In our implementation, $k=1.2$.
For the final ranking, we will use the average Score as the primary measure. Algorithm complexity will serve only as a reference and will not be included in the final metric. Please refer to the evaluation function in 'evaluate.py' of the scoring program.
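As a rough illustration, the sketch below shows how the Score could be computed for a single image pair. It is not the official 'evaluate.py'; the helper name challenge_score and the use of scikit-image's PSNR/SSIM implementations are assumptions made for this example only.

```python
# Minimal sketch of the scoring formula, assuming 8-bit sRGB images loaded as
# NumPy arrays. This is NOT the official 'evaluate.py', only an illustration.
import math

import numpy as np
from skimage.color import rgb2gray
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def challenge_score(result: np.ndarray, gt: np.ndarray, k: float = 1.2) -> float:
    """Compute Score = PSNR + log_k(SSIM) for one image pair (hypothetical helper)."""
    # PSNR is measured directly in the sRGB color space (uint8, data range 255).
    psnr = peak_signal_noise_ratio(gt, result, data_range=255)
    # SSIM is measured on grayscale versions of the images (rgb2gray returns floats in [0, 1]).
    ssim = structural_similarity(rgb2gray(gt), rgb2gray(result), data_range=1.0)
    # Score = log_k(SSIM * k^PSNR) = PSNR + log_k(SSIM), with k = 1.2.
    return psnr + math.log(ssim, k)
```

The final ranking would then be based on this Score averaged over all test images.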
The Low Light Image Enhancement Challenge is one track of the PBDL Challenge, part of the Physics Based Vision meets Deep Learning Workshop 2024, held in conjunction with CVPR 2024. Participants are not restricted to training their algorithms only on the provided dataset; other PUBLIC datasets may be used as well. Participants are expected to develop more robust and generalized methods for low-light image enhancement in real-world scenarios.
When participating in the competition, please be reminded that:
Before downloading and using the dataset, please agree to the following terms of use. You, your employer, and your affiliations are referred to as "User". The organizers and their affiliations are referred to as "Producer".
@article{fu2022gan,
  title={LE-GAN: Unsupervised low-light image enhancement network using attention module and identity invariant loss},
  author={Fu, Ying and Hong, Yang and Chen, Linwei and You, Shaodi},
  journal={Knowledge-Based Systems},
  volume={240},
  pages={108010--108020},
  year={2022},
  publisher={Elsevier}
}
Industry and research labs are allowed to submit entries and compete in both the validation phase and the final test phase. However, to be officially ranked on the final test leaderboard and be eligible for awards, reproducibility of the results is a must; therefore, participants need to make available and submit their code or executables. All top entries will be checked for reproducibility and marked accordingly.
Validation phase — Start: Feb. 1, 2024, midnight
Description: The online evaluation results must be submitted through this CodaLab competition site of the Challenge.
Final test phase — Start: April 23, 2024, midnight
Description: The online evaluation results must be submitted through this CodaLab competition site of the Challenge.
End: April 30, 2024, 11:59 p.m.