NTIRE 2022 Efficient Super-Resolution Challenge Forum


> Question/comment about timings

Hi,

How are the timings going to be calculated for the competing methods?

If it is going to be something similar to the code in https://github.com/ofsoundof/IMDN, which reports the standard baseline method's runtime on a Titan Xp as 0.10 s, there is a glitch in that code: it sets

torch.backends.cudnn.benchmark = True

which runs cuDNN's internal autotuner to pick the fastest algorithms for a given input shape. However, since the input shape changes slightly from image to image, the autotuner keeps re-running for every unseen shape, and this tuning time ends up dominating the model's actual inference time.

This is also mentioned in
https://discuss.pytorch.org/t/model-inference-very-slow-when-batch-size-changes-for-the-first-time/44911

and I have observed the same behaviour in the NVIDIA Visual Profiler.
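Here is a minimal sketch (not the official scoring code) of what I mean: with torch.backends.cudnn.benchmark = True, every new spatial size triggers a fresh autotuning pass, so the first measurement for each shape is inflated. The small Conv2d below is just a placeholder stand-in for any competing model.

import torch

torch.backends.cudnn.benchmark = True  # the setting in question

# placeholder stand-in for a competing SR network
model = torch.nn.Conv2d(3, 3, 3, padding=1).cuda().eval()

def time_forward(x):
    # time one forward pass with CUDA events (milliseconds)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    with torch.no_grad():
        model(x)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end)

# every previously unseen (H, W) triggers cuDNN autotuning, which is
# counted in the measurement; repeating the same shape is fast again
for h, w in [(256, 256), (256, 260), (260, 256), (256, 256)]:
    x = torch.randn(1, 3, h, w, device="cuda")
    print((h, w), f"{time_forward(x):.2f} ms")

With varying test-image sizes, either the flag should be set to False or the timing should exclude the autotuning overhead (e.g., via warm-up runs per shape); otherwise the reported runtimes do not reflect the models themselves.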

I would like the "official" timing/scoring code to be published. If the GitHub code is the official one, then it should be corrected.

Thanks

Posted by: deepernewbie @ March 17, 2022, 12:49 p.m.