We don't have a 3090 card, but we would like to take part in this challenge. Can the organizers help test model running time during the development stage?
Posted by: Noah_TerminalVision @ Feb. 13, 2023, 11:40 a.m.

Hello, we will provide an equivalence table of runtimes for the baseline models IMDN and RFDN (see the model zoo at https://github.com/ofsoundof/NTIRE2022_ESR). The table will cover the RTX 3090, A100, and RTX 5000; we hope to provide it as soon as possible.
Unfortunately, we don't currently support running dozens of submissions on the server at the same time; we run them manually, as will also be the case during the final test phase.
Some references to compare performance:
- https://mtli.github.io/gpubench/
- https://www.aime.info/en/blog/deep-learning-gpu-benchmarks-2022/
In general, the RTX 3090 (24 GB) should behave like a V100. If you develop models for "smaller" GPUs (e.g. RTX 2080 Ti), that is great too; you will probably be very efficient and reach the famous 16 ms per image :)
Also, note that the runtime results of the NTIRE 2022 Efficient SR Challenge were measured on a Titan Xp.
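In the meantime, you can estimate per-image runtime yourself on whatever GPU you have. Below is a minimal sketch, assuming a PyTorch model (the IMDN/RFDN baselines in the linked repo are PyTorch) and the standard CUDA-event timing approach with warm-up iterations; the function name, input shape, and iteration counts are placeholders, not official challenge settings.

```python
import torch

def measure_runtime(model, input_shape=(1, 3, 256, 256), warmup=50, runs=100):
    """Return the average GPU runtime per forward pass in milliseconds."""
    device = torch.device("cuda")
    model = model.to(device).eval()
    x = torch.randn(input_shape, device=device)

    # Warm-up so cuDNN autotuning and lazy initialization don't skew timings.
    with torch.no_grad():
        for _ in range(warmup):
            model(x)
    torch.cuda.synchronize()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    times = []
    with torch.no_grad():
        for _ in range(runs):
            start.record()
            model(x)
            end.record()
            torch.cuda.synchronize()
            times.append(start.elapsed_time(end))  # elapsed time in ms
    return sum(times) / len(times)

# Example usage (hypothetical import, adapt to the model zoo layout):
# from models.rfdn import RFDN
# print(f"{measure_runtime(RFDN()):.2f} ms per image")
```

Together with the GPU benchmark links above, this should let you scale your measured numbers roughly to a 3090 until the equivalence table is published.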