hi,
I downloaded https://github.com/eduardzamfir/NTIRE23-RTSR/blob/master/demo/test.py and tested the inference time without any modification. I found that the inference time is inconsistent across different environments:
(1) RTX 3090 + torch 1.13 (the torch version in requirements.txt), rfdn: 0.076s, imdn: 0.075s
(2) RTX 3090 + torch 1.8, rfdn: 0.060s, imdn: 0.076s
(3) RTX 3090 + NTIRE23-RTSR GitHub, rfdn: 0.055s, imdn: 0.092s
(4) NTIRE22 report, rfdn: 0.042s, imdn: 0.050s
My test command is "python test.py --scale 2 --lr-dir ../../Dataset/DIV2K_valid_LR/X2/ --model-name rfdn/imdn --submission-id 123456 --repeat 100", and the test image size is 1020x540. I would expect at least the ratio (inference time of rfdn) / (inference time of imdn) to be similar across machines. Could you provide a standard environment for us to evaluate our models, or give some advice on how to handle this problem, since runtime is significant in the score calculation? Thanks for your reply~
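For reference, the pattern I assume the script follows when timing is roughly the one below. This is only a minimal sketch, not the actual test.py; `model` stands in for the already-loaded rfdn/imdn network, and the input shape mirrors my 1020x540 LR test images.

```python
import time
import torch

# Minimal sketch of the assumed timing pattern (not the challenge script).
# `model` is assumed to be the already-loaded rfdn/imdn network.
device = torch.device("cuda")
model = model.to(device).eval()
x = torch.rand(1, 3, 540, 1020, device=device)

with torch.no_grad():
    # Warm-up so cuDNN algorithm selection / lazy CUDA initialization is not counted.
    for _ in range(10):
        model(x)
    torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(100):  # matches --repeat 100
        model(x)
    torch.cuda.synchronize()  # wait for all queued kernels before stopping the clock
    avg = (time.perf_counter() - start) / 100

print(f"average runtime: {avg * 1000:.1f} ms")
```

Without the warm-up and the explicit synchronization, the wall-clock numbers can drift quite a lot between PyTorch versions, which may be part of what I am seeing.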
Hi,
we updated the GitHub repo and now compute the runtime of your methods excluding the data loading process. Furthermore, we specify all necessary libraries with their respective versions in requirements.txt.
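Conceptually, the measurement now follows the pattern below. This is a simplified sketch of the idea, not the literal code in the repo; `model` and `lr_image` are placeholders.

```python
import torch

# Simplified sketch: the LR tensor is placed on the GPU before timing, so data
# loading and the host-to-device copy are excluded and only the model's forward
# pass is measured. `model` and `lr_image` are placeholders.
device = torch.device("cuda")
lr = lr_image.to(device)
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

with torch.no_grad():
    start.record()
    sr = model(lr)
    end.record()
torch.cuda.synchronize()

runtime_ms = start.elapsed_time(end)  # milliseconds spent in the forward pass only
```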
Please check if you still experience differences and let us know.
Best,
- Organizers
hi,
I tested the new script with an RTX 3090 and PyTorch 1.13:
(1) rfdn: fp32 56.9ms, fp16 42.9ms
(2) imdn: fp32 74.1ms, fp16 52.4ms
I think these numbers are reasonable. Thank you!
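For anyone comparing fp16 vs fp32: as far as I can tell, the fp16 path simply casts the network and input to half precision before the same timing loop. A minimal sketch of that idea, with `model` and `lr` as placeholders for the loaded network and LR tensor:

```python
import torch

# Minimal sketch of the fp16 forward pass, assuming `model` and `lr` are the
# same network and LR tensor used for the fp32 measurement.
model_fp16 = model.half().cuda().eval()
lr_fp16 = lr.half().cuda()

with torch.no_grad():
    sr = model_fp16(lr_fp16)  # forward pass in half precision
print(sr.dtype)  # torch.float16
```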