Low-light Object Detection and Instance Segmentation

Organized by BIT-CV


PBDL2024: Low-light Object Detection and Instance Segmentation Challenge

News!

  • The Low-light Object Detection and Instance Segmentation track starts now! The training and validation data are released. Check out the "Get data" page and prepare your submission!

Important dates

  • 2024.02.20 Challenge site online

  • 2024.02.21 Release of train data (paired images) and validation data (inputs only)

  • 2024.03.01 Validation server online

  • 2024.04.23 Final test data release (inputs only)

  • 2024.04.30 Test submission deadline

  • 2024.05.05 Fact sheets and code/executable submission deadline

  • 2024.05.10 Preliminary test and rating results release to participants

Overview

In comparison to well-lit environments, low-light conditions pose significant challenges to maintaining image quality, often resulting in notable degradation such as loss of detail, color distortion, and pronounced noise. These factors detrimentally impact the performance of downstream visual tasks, particularly object detection and instance segmentation. Recognizing the critical importance of overcoming these obstacles, research into low-light object detection and instance segmentation has emerged as a pivotal area within the computer vision community, aiming to accurately localize and classify objects of interest under challenging lighting conditions.

To propel research in this field forward, it is essential to assess proposed methods in real-world scenarios, where lighting conditions and image noise are inherently more complex and diverse. Consequently, we will use the Low-light Instance Segmentation (LIS) dataset, introduced by Prof. Fu's team in [a] and captured with a Canon EOS 5D Mark IV camera. The LIS dataset comprises paired images collected across various scenes, encompassing both indoor and outdoor environments. To cover a comprehensive range of low-light conditions, different ISO levels (e.g., 800, 1600, 3200, 6400) were used for the long-exposure reference images, and exposure times were deliberately shortened by varying low-light factors (e.g., 10, 20, 30, 40, 50, 100) to simulate extremely low-light conditions. Each image pair in the LIS dataset contains instances of eight common object classes (bicycle, car, motorcycle, bus, bottle, chair, dining table, TV), accompanied by precise instance-level pixel-wise labels. These annotations serve as the ground truth for evaluating the performance of proposed methods on object detection and instance segmentation.

We will host the competition on an open-source online platform (CodaLab). All submissions are evaluated by our script running on the server, and we will manually double-check the results of the top-ranked methods before releasing the final test-set rating.

Submission

The training data has already been made available to registered participants.

General Rules

Please check the terms and conditions for further rules and details.

Reference

[a] Chen, L., Fu, Y., Wei, K., Zheng, D., & Heide, F. (2023). Instance Segmentation in the Dark. International Journal of Computer Vision, 131(8), 2198-2218.

Contact Us

Please test the submission process during the validation (val) phase to ensure smooth submission. If you encounter any bugs, please contact us promptly during the validation phase; once the testing phase begins, to ensure fairness, we will no longer address issues related to result submissions. You can contact the organizers by sending an email to pbdl.ws@gmail.com with the title 'Low-light Object Detection and Instance Segmentation Inquiry'.

Q&A

1. Q: How to align the category IDs and image IDs?
A:
For an mmdetection config, you can organize the val/test dataset settings as follows:
```python
# In an mmdetection (2.x-style) config; use the same settings
# for both data.val and data.test.
data = dict(
    val=dict(
        type='CocoDataset',
        classes=('bicycle', 'car', 'motorbike', 'bus',
                 'bottle', 'chair', 'diningtable', 'tvmonitor'),
        ann_file='/dataset/LIS/lis_coco_png_raw_dark_valonly_challenge_noanno.json',
        img_prefix='/dataset/LIS/RAW_Dark/val_challenge/',
        pipeline=test_pipeline))
```
The `lis_coco_png_raw_dark_valonly_challenge_noanno.json` file can be accessed via the following link:
[Google Drive Link](https://drive.google.com/file/d/1fpDjUX4-vXuFgJMsLSFLRnnsc_QfZhHC/view?usp=drive_link)

In this way, the category IDs and image IDs are aligned.
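
To sanity-check the alignment, you can load the annotation file with pycocotools and print the category and image IDs it defines. This is a minimal sketch; it assumes pycocotools is installed and that the file keeps the standard COCO `images` and `categories` fields (no `annotations` are needed):
```python
from pycocotools.coco import COCO

# Load the challenge annotation file (images + categories, no annotations).
coco = COCO('/dataset/LIS/lis_coco_png_raw_dark_valonly_challenge_noanno.json')

# Category IDs and names that your submission's category_id must match.
for cat in coco.loadCats(coco.getCatIds()):
    print(cat['id'], cat['name'])

# A few image IDs and file names that your submission's image_id must match.
for img in coco.loadImgs(coco.getImgIds()[:5]):
    print(img['id'], img['file_name'])
```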

2. Q: How to obtain a .json file for submission?
A: For mmdetection, you can obtain the JSON files for submission using:
```bash
python ~/code/mmdetection/tools/test.py \
    config.py \
    checkpoint.pth \
    --options "jsonfile_prefix=./test_results" \
    --format-only
```
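
With `--format-only`, mmdetection writes COCO-style result files named after `jsonfile_prefix`, typically `test_results.bbox.json` and `test_results.segm.json` (exact names can vary across versions). Each entry follows the standard COCO results format; the snippet below illustrates the expected fields with placeholder values only:
```python
# Illustrative COCO result entries (all values are placeholders).
bbox_result = {
    'image_id': 1,                     # must match an image ID in the challenge json
    'category_id': 1,                  # must match a category ID in the challenge json
    'bbox': [10.0, 20.0, 50.0, 40.0],  # [x, y, width, height] in pixels
    'score': 0.95,                     # detection confidence
}
segm_result = {
    'image_id': 1,
    'category_id': 1,
    'segmentation': {'size': [480, 640], 'counts': '...'},  # RLE-encoded mask
    'score': 0.95,
}
```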

3. Q: How can I obtain a robust baseline model for low-light instance segmentation?
A: You can find one at https://github.com/Linwei-Chen/LIS

Evaluation Criteria

The COCO API is used to evaluate detection and segmentation results. It provides utilities for handling the I/O of images, annotations, and evaluation results. Please see the COCO API overview page for getting started and the detection evaluation page for more details on the metrics.
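
For local experiments on data where you have ground truth (the challenge val/test ground truth is not released), a typical pycocotools evaluation run looks like the following sketch; the file names are hypothetical:
```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Hypothetical file names for a local split with ground-truth annotations.
coco_gt = COCO('local_split_with_gt.json')           # ground truth
coco_dt = coco_gt.loadRes('test_results.segm.json')  # your predictions

# Use iouType='bbox' for detection, 'segm' for instance segmentation.
coco_eval = COCOeval(coco_gt, coco_dt, iouType='segm')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP/AR at the standard COCO thresholds
```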

Terms and Conditions

General Rules

The Low-light Object Detection and Instance Segmentation Challenge is one track of the PBDL Challenge, part of the Physics Based Vision meets Deep Learning Workshop 2024, in conjunction with CVPR 2024. Participants are not restricted to training their algorithms only on the provided dataset; other PUBLIC datasets may be used as well. Participants are expected to develop more robust and generalized methods for low-light object detection and instance segmentation in real-world scenarios.

When participating in the competition, please be reminded that:

  • Results in the correct format must be uploaded to the evaluation server. The evaluation page lists detailed information regarding how results will be evaluated.
  • Each entry must be associated with a team and provide its affiliation.
  • Using multiple accounts to increase the number of submissions and private sharing outside teams are strictly prohibited.
  • The organizer reserves the absolute right to disqualify entries that are incomplete or illegible, late entries, or entries that violate the rules.
  • The organizer reserves the right to adjust the competition schedule and rules as circumstances require.
  • The best entry of each team will be public on the leaderboard at all times.
  • To compete for awards, the participants must fill out a fact sheet briefly describing their methods. There is no other publication requirement.

Terms of Use: Dataset

Before downloading and using the dataset, please agree to the following terms of use. You, your employer, and your affiliations are referred to as the "User". The organizers and their affiliations are referred to as the "Producer".

  • All data is used for non-commercial/non-profit research purposes only.
  • All the images in the dataset can be used for academic purposes.
  • The User takes full responsibility for any consequence caused by his/her use of the dataset in any form and shall defend and indemnify the Producer against all claims arising from such uses.
  • The User should NOT distribute, copy, reproduce, disclose, assign, sublicense, embed, host, transfer, sell, trade, or resell any portion of the dataset to any third party for any purpose.
  • The User can provide his/her research associates and colleagues with access to the dataset (the download link or the dataset itself), provided that he/she agrees to be bound by these terms of use and guarantees that his/her research associates and colleagues agree to be bound by these terms of use.
  • The User should NOT remove or alter any copyright, trademark, or other proprietary notices appearing on or in copies of the dataset.
  • This agreement is effective for any potential User of the dataset upon the date that the User first accesses the dataset in any form.
  • The Producer reserves the right to terminate the User's access to the dataset at any time.
  • If you use the dataset, please consider citing the following papers:

@article{2023lis,
    title={Instance Segmentation in the Dark},
    author={Chen, Linwei and Fu, Ying and Wei, Kaixuan and Zheng, Dezhi and Heide, Felix},
    journal={International Journal of Computer Vision},
    volume={131},
    number={8},
    pages={2198-2218},
    year={2023},
    publisher={Springer}
}
@inproceedings{fuying-2021-bmvc,
    title={Crafting Object Detection in Very Low Light},
    author={Hong, Yang and Wei, Kaixuan and Chen, Linwei and Fu, Ying},
    booktitle={British Machine Vision Conference (BMVC)},
    year={2021}
}

Reproducibility 

Industry and research labs are allowed to submit entries and to compete in both the validation phase and the final test phase. However, in order to be officially ranked on the final test leaderboard and to be eligible for awards, reproducibility of the results is a must; participants therefore need to make available and submit their code or executables. All top entries will be checked for reproducibility and marked accordingly.


valid-detection

Start: Feb. 20, 2024, midnight

Description: The online evaluation results must be submitted through this CodaLab competition site of the Challenge.

valid-segment

Start: Feb. 20, 2024, midnight

Description: The online evaluation results must be submitted through this CodaLab competition site of the Challenge.

test-detection

Start: April 23, 2024, midnight

Description: The online evaluation results must be submitted through this CodaLab competition site of the Challenge.

test-segment

Start: April 23, 2024, midnight

Description: The online evaluation results must be submitted through this CodaLab competition site of the Challenge.

Competition Ends

April 30, 2024, 11:59 p.m. UTC
