Hi, I'm still figuring out how to use MMDetection just to understand the submission format. I managed to install the MMDetection library, but the explanation in Track 1's readme (https://github.com/AICyberTeam/DFC2023-baseline/tree/main/track1) is not enough to run anything.
There's no checkpoint file `checkpoint/mask_rcnn_r50_fpn_roof_fine/latest.pth` to run a test.
Some annotations (or scripts to produce them) are also missing; e.g., `DFC2023-baseline/track1/configs/mask_rcnn_r50_fpn_roof_fine_rgb.py` expects `roof_fine_train_rgb.json` and `roof_fine_val_rgb.json`.
I hope you can provide clarity on how to run the baseline, as stated in the description.
Posted by: Sandhi @ Jan. 6, 2023, 10:06 p.m.

Thanks for your advice!
Sorry for the missing annotations 'roof_fine_train_rgb.json' and 'roof_fine_val_rgb.json'. The json files with the 'rgb' suffix are the same as the ones without the suffix. We have fixed this issue on the GitHub page; you can re-download the config files.
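If you would rather not re-download the configs, a quick workaround is to copy the existing annotation file to the name the config expects. This is only a sketch; the `annotations/` directory is my assumption, so adjust the paths to your data root.

```python
# Workaround sketch: the '*_rgb.json' annotations are identical to the ones
# without the suffix, so copying them to the expected names also works.
# The 'annotations/' directory is an assumption; adjust to your layout.
import shutil

shutil.copyfile("annotations/roof_fine_train.json",
                "annotations/roof_fine_train_rgb.json")
# Note: 'roof_fine_val.json' is not provided in phase 1 (see below), so only
# the training annotation can be copied this way.
```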
For phase 1, we do not provide the 'roof_fine_val.json' annotation. You can only generate the submission format using 'image_id_val.json'.
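To illustrate, here is a minimal sketch of building a submission skeleton from 'image_id_val.json'. The field layout (a COCO-style result list with one entry per predicted instance) and the content of 'image_id_val.json' (a list of image ids, or of dicts with an "id" field) are my assumptions; please check the track1 readme for the exact fields.

```python
import json

# Assumption: "image_id_val.json" lists the validation image ids.
with open("image_id_val.json") as f:
    image_ids = json.load(f)

submission = []
for entry in image_ids:
    image_id = entry["id"] if isinstance(entry, dict) else entry
    # Dummy entry, just to show the COCO-style result structure (assumed).
    submission.append({
        "image_id": image_id,
        "category_id": 1,                 # roof class id (assumed)
        "bbox": [0.0, 0.0, 10.0, 10.0],   # [x, y, width, height], COCO convention
        "score": 0.5,
        "segmentation": {"size": [512, 512], "counts": ""},  # RLE placeholder (size assumed)
    })

with open("submission.json", "w") as f:
    json.dump(submission, f)
```

In practice you would fill these entries with your model's predictions rather than placeholders.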
To train the baseline checkpoints using MMDetection, you can run the following command:
python tools/train.py $CONFIG --work-dir $CHECKPOINT_DIR
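For example, with the Track 1 config and the checkpoint directory mentioned above (paths are illustrative; adjust them to where the track1 configs and your work directory actually live):

python tools/train.py track1/configs/mask_rcnn_r50_fpn_roof_fine_rgb.py --work-dir checkpoint/mask_rcnn_r50_fpn_roof_fine

Training writes latest.pth into the work directory, which is the checkpoint path the baseline expects for testing.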
We recommend reading the MMDetection documentation for more details: https://mmdetection.readthedocs.io/en/latest/
Will the annotations for the validation set (roof_fine_val.json) be released along with the test images in the test phase?
Posted by: venkanna37 @ March 7, 2023, 10:21 a.m.