Hi,
I am trying to submit a sample solution to check if I have the correct format.
However, I got this error:
============================================================================================
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Traceback (most recent call last):
  File "/tmp/codalab/tmpp2h8ZG/run/program/evaluate.py", line 176, in
    handle_error("The input names does not match the groundtruth names")
  File "/tmp/codalab/tmpp2h8ZG/run/program/evaluate.py", line 126, in handle_error
    raise
RuntimeError: No active exception to reraise
============================================================================================
Can you help me with this?
Posted by: longpham3105 @ May 26, 2022, 4:38 a.m.
Dear participant,
The error says "The input names does not match the groundtruth names".
Note that you must follow the exact instructions provided on our webpage ("Making a submission" section).
First, name your prediction file as "predictions.pkl". Then compress it as "the_filename_you_want.zip".
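For reference, here is a minimal sketch of one way to produce such an archive from Python using the standard zipfile module (the archive name below is just an example; only the file inside must be named "predictions.pkl"):

import zipfile

# Pack the prediction file for upload. The archive name is arbitrary,
# but the file stored inside must be called "predictions.pkl".
with zipfile.ZipFile('my_submission.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    zf.write('predictions.pkl', arcname='predictions.pkl')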
If this does not solve the problem, you may have changed the structure of the submission template. In that case, please let us know and I will ask my colleague to help you.
Best
So, I need to read the template "predictions.pkl", look for the correct keys, and fill in the bounding boxes and class IDs.
Please correct me if I am wrong.
Thanks
Posted by: longpham3105 @ May 27, 2022, 2:36 a.m.
Dear participant,
I am trying to contact my colleague, who will give you a concrete answer to your question soon. In the meantime, checking the submission template and/or the database page (https://chalearnlap.cvc.uab.cat/dataset/43/description/) may help.
Best
Hi, aforementioned colleague here.
The error you are encountering occurs when there is a key mismatch between the provided .pkl file and the ground truths.
It is very important that every key remains the same, as well as the structure of the dictionary saved within.
The error is only thrown when an entry exists in the ground truth but not in predictions.pkl; i.e. assuming all entries exist and a wrong additional entry is made in the submission, another error will be thrown detailing exactly which entry is missing.
Like Juliojj suggested, I would recommend double-checking the structure of your submission, and also remember to pickle the file in your script using the argument protocol=4 in the pickle.dump() call if you are using a Python version higher than 2.7, as the default protocol changed in later versions.
You are correct that each entry contains only a list of class ids ([1, ...]) and a list of lists with absolute bounding box coordinates ([[top_left_x, top_left_y, bot_right_x, bot_right_y], ...]).
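Purely as an illustration (the month key, image name, and numbers below are invented, not taken from the template), a single entry would then look something like this:

# Hypothetical illustration of the expected structure; the real keys come from the template.
example_predictions = {
    'February': {                              # month key (invented here)
        '20210213_clip_0_0000_0057': {         # image name as given by the dataloader
            'labels': [1, 3],                  # one class id per detection
            'boxes': [[12, 40, 58, 96],        # [top_left_x, top_left_y,
                      [200, 110, 240, 170]],   #  bot_right_x, bot_right_y]
        },
    },
}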
I can provide an example of how the baseline YoloV5 submission was created.
First, we ran the predictions for each image, using detect.py from the official YoloV5 repo to create a text file for each image, and saved them in a folder using the image file name as provided by the Harborfront dataloader (e.g. "20210213_clip_0_0000_0057.txt" [YYYYMMDD_clip_N_HHMM_FRAMENUMBER.txt]). When making the submission, we iterated through a copy of the predictions.pkl provided in the challenge and appended the necessary predictions (if present). Below I have attached the script used to read the YoloV5 prediction .txt files into the supplied predictions.pkl.
I hope this helps :) If not, feel free to ask more questions here (preferred) or mail me at asjo@create.aau.dk.
Additionally, I will look into adding more information to the current evaluation so that it returns more detailed feedback on which entry is a mismatch and whether it is missing on the evaluation side (i.e. an entry for an image is missing) or on the user side (i.e. an entry with an unknown name has been submitted).
# PYTHON SCRIPT BELOW
import pickle
import os
import csv

# Image resolution of the Harborfront frames (width, height),
# used to convert normalised YOLO coordinates to absolute pixels.
imsize = 384, 288

input_template = 'predictions.pkl'                    # template provided with the challenge
output_predictions = 'baseline_predictions.pkl'       # filled-in copy to be submitted
yolo_predictions = 'yolov5/runs/detect/exp/labels/'   # per-image .txt files from detect.py

# Load the empty prediction template.
with open(input_template, 'rb') as f:
    predictions_dict = pickle.load(f)

for month, days in predictions_dict.items():
    for day, slot in days.items():
        boxes = []
        labels = []
        label_file = os.path.join(yolo_predictions, day + '.txt')
        if os.path.exists(label_file):
            with open(label_file, 'r') as d:
                reader = csv.reader(d, delimiter=' ')
                # Each row is: class cx cy w h (normalised [0, 1] coordinates).
                for row in reader:
                    cls = int(row[0])
                    cx = float(row[1])
                    cy = float(row[2])
                    w = float(row[3])
                    h = float(row[4])
                    # Convert centre/size to absolute corner coordinates:
                    # [top_left_x, top_left_y, bot_right_x, bot_right_y].
                    box = [int((cx - (w / 2)) * imsize[0]),
                           int((cy - (h / 2)) * imsize[1]),
                           int((cx + (w / 2)) * imsize[0]),
                           int((cy + (h / 2)) * imsize[1])]
                    boxes.append(box)
                    labels.append(cls)
        else:
            print("No predictions for: {}".format(day))
        # Images without detections keep empty lists.
        predictions_dict[month][day]['labels'] = labels
        predictions_dict[month][day]['boxes'] = boxes

# Dump with protocol=4, as recommended above.
with open(output_predictions, 'wb') as f:
    pickle.dump(predictions_dict, f, protocol=4)
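As an optional sanity check before zipping (just a suggestion, not part of the official tooling), you can reload the filled file and compare its key structure against the supplied template:

import pickle

# Compare the keys of the filled file against the supplied template.
with open('predictions.pkl', 'rb') as f:
    template = pickle.load(f)
with open('baseline_predictions.pkl', 'rb') as f:
    filled = pickle.load(f)

assert template.keys() == filled.keys(), "month keys differ"
for month in template:
    assert template[month].keys() == filled[month].keys(), "entry keys differ in " + month
print("Key structure matches the template.")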
Thank you so much for your detailed response.
I will try again based on your guidance.
Hi colleagues,
I have solved the previous problem; however, a new one has appeared.
This time the error is:
"WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Execution time limit exceeded!"
My submission file is only 175 MB.
Can you provide some guidance on this issue, for example a suggestion regarding the file size?
Posted by: longpham3105 @ May 30, 2022, 3:18 a.m.
It is a result of the Docker container exceeding the allocated memory.
The baseline submissions are ~50-60 MB; before the challenge I tested with submission files of 120 MB, which also worked.
My guess is that you have a lot of proposal predictions for it to reach 175 MB. Without knowing exactly what it contains, I would look into what precision the values in each entry are saved with (e.g. uint8, uint64, float32).
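If that turns out to be the cause, a rough sketch of how one might shrink the file (assuming the bulk of the size comes from numpy float64/int64 scalars; 'my_predictions.pkl' is a placeholder name for your filled-in file) is to cast every value to a plain Python int before re-dumping, since the boxes are absolute pixel coordinates anyway:

import pickle

# Sketch: cast all stored values to plain Python ints before re-dumping.
with open('my_predictions.pkl', 'rb') as f:    # placeholder name for your filled-in file
    preds = pickle.load(f)

for month, days in preds.items():
    for day, entry in days.items():
        entry['labels'] = [int(c) for c in entry['labels']]
        entry['boxes'] = [[int(v) for v in box] for box in entry['boxes']]

with open('predictions.pkl', 'wb') as f:
    pickle.dump(preds, f, protocol=4)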
If you want me to inspect it, you could send a download link for the predictions.pkl to my mail (asjo@create.aau.dk); then I will give it a look and see if I can discern whether something is not working as intended.
Dear longpham3105,
regarding the error you got,
"WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Execution time limit exceeded!"
it may be related to the time it is taking to compute the evaluation score ("Execution time limit exceeded!"). You should be able to submit files up to 300 MB; however, we defined a maximum time budget of 1800 s (30 min) to compute the score. High computation time may be related to some CodaLab instability or to a high number of iterations given the submitted predictions. Please let us know if you continue having this problem, as we may need to increase the time budget.
Best
I got things working now.
It seems that when the submitted file is <= 150 MB, the eval script runs fine.
Posted by: longpham3105 @ June 1, 2022, 3:44 a.m.
Dear longpham3105,
I am happy that it is working now.
We have also increased the time budget from 30 to 45 min to avoid this kind of problem.
Best