The Low-light Object Detection and Instance Segmentation track starts now! We have released the validation and training data. Check out the "Get data" page and prepare your submission!
2024.02.20 Challenge site online
2024.03.01 Validation server online
2024.04.23 Final test data release (inputs only)
2024.04.30 Test submission deadline
2024.05.05 Fact sheets and code/executable submission deadline
In comparison to well-lit environments, low-light conditions pose significant challenges to maintaining image quality, often resulting in notable degradation such as loss of detail, color distortion, and pronounced noise. These factors detrimentally impact the performance of downstream visual tasks, particularly object detection and instance segmentation. Recognizing the critical importance of overcoming these obstacles, research into low-light object detection and instance segmentation has emerged as a pivotal area within the computer vision community, aiming to accurately localize and classify objects of interest under challenging lighting conditions.
To propel research in this field forward, it is essential to assess proposed methods in real-world scenarios, where lighting conditions and image noise are inherently more complex and diverse. Consequently, we will use the Low-light Instance Segmentation (LIS) dataset, introduced by Prof. Fu's team in [a] and captured with a Canon EOS 5D Mark IV camera. The LIS dataset comprises paired images collected across various indoor and outdoor scenes. To cover a comprehensive range of low-light conditions, different ISO levels (e.g., 800, 1600, 3200, 6400) were used for the long-exposure reference images, and exposure times were deliberately shortened by varying low-light factors (e.g., 10, 20, 30, 40, 50, 100) to simulate extremely low-light conditions. Each image pair includes instances of eight common object classes (bicycle, car, motorcycle, bus, bottle, chair, dining table, TV), accompanied by precise instance-level pixel-wise labels. These annotations serve as the ground truth for evaluating the performance of proposed methods on object detection and instance segmentation. We will host the competition on an open-source online platform (CodaLab). All submissions are evaluated by our script running on the server, and we will manually double-check the results of the top-ranked methods before releasing the final test-set ranking.
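Intuitively, a low-light factor divides the effective exposure. The sketch below illustrates that idea in a few lines of NumPy; note this is only a rough illustration, not the dataset's actual capture pipeline (the real short-exposure images are captured in camera), and the function name `simulate_low_light` and its simple read-noise model are assumptions for this example.

```python
import numpy as np

def simulate_low_light(img, factor=20, read_noise_sigma=2.0, seed=0):
    """Roughly mimic a short exposure: scale intensities down by the
    low-light factor, then add Gaussian read noise (a simplification
    of real sensor noise)."""
    rng = np.random.default_rng(seed)
    dark = img.astype(np.float32) / factor                 # shorter exposure
    dark += rng.normal(0.0, read_noise_sigma, img.shape)   # sensor read noise
    return np.clip(dark, 0, 255).astype(np.uint8)

bright = np.full((4, 4, 3), 200, dtype=np.uint8)  # toy "well-lit" image
dark = simulate_low_light(bright, factor=20)      # toy "low-light" image
```

With a factor of 20, a pixel value of 200 drops to roughly 10 before noise, which is why detail and color are so hard to recover at the extreme factors (50, 100) used in LIS.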
The training data is already made available to the registered participants.
Please check the terms and conditions for further rules and details.
Please test the submission process during the validation (val) phase to ensure smooth submission. If there are any bugs, please contact us promptly during the validation phase. Once the testing phase begins, to ensure fairness, we will no longer address issues related to result submissions. You can contact the organizers by sending an email to pbdl.ws@gmail.com with the title 'Low-light Object Detection and Instance Segmentation Inquiry'.
1. Q: How are the category IDs and image IDs aligned?
A: For an mmdetection config, you can organize the val/test dataset settings as follows:
```python
val = dict(  # use the same settings for the test dataset
    type='CocoDataset',
    classes=('bicycle', 'car', 'motorbike', 'bus',
             'bottle', 'chair', 'diningtable', 'tvmonitor'),
    ann_file='/dataset/LIS/lis_coco_png_raw_dark_valonly_challenge_noanno.json',
    img_prefix='/dataset/LIS/RAW_Dark/val_challenge/',
    pipeline=test_pipeline,
)
```
The `lis_coco_png_raw_dark_valonly_challenge_noanno.json` can be accessed via the following link:
[Google Drive Link](https://drive.google.com/file/d/1fpDjUX4-vXuFgJMsLSFLRnnsc_QfZhHC/view?usp=drive_link)
In this way, the category IDs and Image IDs are aligned.
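The alignment can be sanity-checked directly against the annotation file: the category names, sorted by ID, should appear in the same order as the `classes` tuple in the config, so that each predicted label maps to the correct `category_id`. The snippet below shows the check on a hypothetical excerpt of the JSON (the field layout follows the COCO format, but the image entry and the assumption that category IDs run 1–8 are illustrative; substitute the real `lis_coco_png_raw_dark_valonly_challenge_noanno.json` in practice).

```python
# The tuple passed to `classes` in the mmdetection config; order matters.
CLASSES = ('bicycle', 'car', 'motorbike', 'bus',
           'bottle', 'chair', 'diningtable', 'tvmonitor')

# Hypothetical excerpt mimicking the released annotation JSON; the real
# file ships the full image list, and the IDs below are assumptions.
ann = {
    "images": [{"id": 1, "file_name": "val_00001.png"}],
    "categories": [{"id": i + 1, "name": n} for i, n in enumerate(CLASSES)],
}

# Category names ordered by ID must match the config's `classes` order.
names = tuple(c["name"]
              for c in sorted(ann["categories"], key=lambda c: c["id"]))
```

If `names` differs from `CLASSES`, the submitted `category_id`s will be scored against the wrong classes, so this check is worth running once before submitting.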
2. Q: How to obtain a .json file for submission?
A: For mmdetection, you can obtain the JSON file for submission using:
```bash
python ~/code/mmdetection/tools/test.py \
    config.py \
    checkpoint.pth \
    --options "jsonfile_prefix=./test_results" \
    --format-only
```
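The resulting file follows the standard COCO results format (one entry per predicted instance, the layout that `pycocotools.COCO.loadRes` accepts). A minimal hand-written example is sketched below; all concrete values are made up, and the RLE `counts` string is left as a placeholder since it would normally come from `pycocotools.mask.encode`.

```python
import json

results = [{
    "image_id": 1,                       # must match an image id in the anno file
    "category_id": 2,                    # must match the anno file's category ids
    "bbox": [100.0, 50.0, 80.0, 60.0],   # [x, y, width, height] in pixels
    "score": 0.93,                       # detection confidence
    # For the segmentation track, each entry also carries an RLE mask;
    # "counts" is a placeholder here, not a valid RLE string.
    "segmentation": {"size": [480, 640], "counts": "<rle-string>"},
}]

submission = json.dumps(results)  # contents of e.g. test_results.segm.json
```

`tools/test.py` with `--format-only` produces files of exactly this shape, so hand-editing is rarely needed; the sketch is mainly useful for debugging a rejected submission.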
3. Q: How can I obtain a robust baseline model for low-light instance segmentation?
A: You can find one at https://github.com/Linwei-Chen/LIS
The Low-light Object Detection and Instance Segmentation Challenge is one track of the PBDL-Challenge, Physics Based Vision meets Deep Learning Workshop 2024, in conjunction with CVPR 2024. Participants are not restricted to training their algorithms only on the provided dataset; other PUBLIC datasets may be used as well. Participants are expected to develop more robust and generalized methods for low-light object detection and instance segmentation in real-world scenarios.
When participating in the competition, please be reminded that:
Before downloading and using the dataset, please agree to the following terms of use. You, your employer, and your affiliations are referred to as "User". The organizers and their affiliations are referred to as "Producer".
@article{2023lis,
title={Instance Segmentation in the Dark},
author={Chen, Linwei and Fu, Ying and Wei, Kaixuan and Zheng, Dezhi and Heide, Felix},
journal={International Journal of Computer Vision},
volume={131},
number={8},
pages={2198--2218},
year={2023},
publisher={Springer}
}
@inproceedings{fuying-2021-bmvc,
title={Crafting Object Detection in Very Low Light},
author={Hong, Yang and Wei, Kaixuan and Chen, Linwei and Fu, Ying},
booktitle={British Machine Vision Conference (BMVC)},
year={2021}
}
Industry and research labs are allowed to submit entries and to compete in both the validation phase and the final test phase. However, in order to get officially ranked on the final test leaderboard and to be eligible for awards the reproducibility of the results is a must and, therefore, the participants need to make available and submit their codes or executables. All the top entries will be checked for reproducibility and marked accordingly.
Start: Feb. 20, 2024, midnight
Description: The online evaluation results must be submitted through this CodaLab competition site of the Challenge.
Start: April 23, 2024, midnight
Description: The online evaluation results must be submitted through this CodaLab competition site of the Challenge.
April 30, 2024, 11:59 p.m.