NTIRE 2023 Real-Time Super-Resolution - Track 1 (X2) Forum


> A question about torch_tensorrt inference

In https://github.com/eduardzamfir/NTIRE23-RTSR/blob/master/demo/runtime_demo.py, I noticed that torch_tensorrt is used to measure the inference time. The "Evaluation" page says that methods above 42 ms (24 FPS) per image will not be considered acceptable in this setup. Does the 42 ms (24 FPS) limit refer to torch_tensorrt inference or to PyTorch FP16 inference?

Posted by: PixelBE @ Feb. 23, 2023, 1:32 a.m.

TensorRT is optional; the main setup is plain FP16.
We expect that methods able to process 30 FPS in FP16 will reach 60 FPS once TensorRT is used. We found that some models cannot be converted to TensorRT, so we do not require it.
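For reference, a minimal FP16 timing sketch (this is not the official runtime_demo.py; the input shape, iteration counts, and function names here are placeholders):

```python
import torch

def time_fp16(model, shape=(1, 3, 360, 640), iters=100):
    """Average FP16 forward-pass time in ms on a CUDA GPU."""
    model = model.cuda().half().eval()
    x = torch.randn(shape, device="cuda", dtype=torch.half)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with torch.no_grad():
        for _ in range(10):          # warm-up iterations
            model(x)
        torch.cuda.synchronize()
        start.record()
        for _ in range(iters):
            model(x)
        end.record()
        torch.cuda.synchronize()     # wait for all kernels to finish
    return start.elapsed_time(end) / iters   # ms per image

# Optional TensorRT FP16 compilation (may fail for some architectures,
# as noted above, which is why it is not required):
# import torch_tensorrt
# trt_model = torch_tensorrt.compile(
#     model,
#     inputs=[torch_tensorrt.Input((1, 3, 360, 640), dtype=torch.half)],
#     enabled_precisions={torch.half},
# )
```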

Posted by: nanashi @ Feb. 23, 2023, 12:49 p.m.

Is there any constraint on PSNR if the runtime is exactly 42 ms?

Posted by: tuvovan @ Feb. 24, 2023, 5:19 a.m.

The only PSNR constraint is to be above bicubic interpolation; otherwise the score = 0. Check the formula in the Evaluation section: https://codalab.lisn.upsaclay.fr/competitions/10227#learn_the_details-evaluation
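For a quick self-check, a sketch of that constraint (function names are illustrative; the full scoring formula is on the Evaluation page linked above):

```python
import torch
import torch.nn.functional as F

def psnr(a, b, max_val=1.0):
    """PSNR between two images with values in [0, max_val]."""
    mse = torch.mean((a - b) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

def beats_bicubic(sr, hr, lr, scale=2):
    # Baseline: bicubic upscaling of the LR input to the HR resolution.
    base = F.interpolate(lr, scale_factor=scale,
                         mode="bicubic", align_corners=False)
    return psnr(sr, hr) > psnr(base, hr)   # False -> score = 0
```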

Posted by: nanashi @ Feb. 24, 2023, 10:05 a.m.