The challenge description suggests that all models must run exclusively in mixed precision or 16-bit floating point (see the "MP FP 16" part of the quote below). Does this mean that full-precision (32-bit or 64-bit) models will not be accepted?
"The goal of this challenge is to upscale videos in real-time at 30/60FPS (30-16ms per frame) using deep learning models and commercial GPUs at MP FP 16 (mainly RTX 3090, and RTX 3060)."
Posted by: qub3k @ Feb. 22, 2024, 11:53 a.m.

FP32 and FP16 are allowed. We do not support quantized models yet.
Posted by: nanashi @ March 4, 2024, 11:33 p.m.
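
For reference on the quoted budget: 30 FPS corresponds to roughly 33 ms per frame and 60 FPS to roughly 16 ms, so the "30-16ms per frame" in the description presumably means ~33-16 ms. Below is a minimal sketch, assuming PyTorch on a CUDA GPU such as an RTX 3090/3060, of timing an FP16 inference pass against that budget. The tiny 2x upscaler, the input resolution, and all names are hypothetical placeholders for illustration, not a challenge baseline.

```python
import time

import torch

# Hypothetical toy 2x upscaler (placeholder, not a challenge baseline).
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 32, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(32, 3 * 2 * 2, 3, padding=1),
    torch.nn.PixelShuffle(2),  # rearranges 12 channels into a 2x-larger RGB frame
).cuda().eval()

frame = torch.rand(1, 3, 540, 960, device="cuda")  # one 960x540 RGB frame

# FP16 path: cast weights and input to half precision. Alternatively, keep
# FP32 weights and wrap inference in torch.autocast("cuda", dtype=torch.float16)
# for mixed precision.
model = model.half()
frame = frame.half()

with torch.no_grad():
    model(frame)  # warm-up pass so CUDA kernels are loaded and cached
    torch.cuda.synchronize()
    start = time.perf_counter()
    model(frame)
    torch.cuda.synchronize()  # wait for the GPU to finish before reading the clock
    elapsed_ms = (time.perf_counter() - start) * 1e3
    print(f"FP16 frame time: {elapsed_ms:.2f} ms "
          f"(budget: ~33 ms @ 30 FPS, ~16 ms @ 60 FPS)")
```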