NTIRE 2022 High Dynamic Range Challenge - Track 1 Fidelity (low-complexity constraint) Forum


> about gmacs

Hello,
There is an official script for computing GMACs:
def get_gmacs_and_params(model, input_size=(1,3,6,1060,1900), print_detailed_breakdown=False):
......
So when calculating the gmacs, the input_size must be (1,3,6,1060,1900)?

Posted by: FangyaLi @ Feb. 19, 2022, 8:38 a.m.

Hello FangyaLi,

The input images are 3 images of 1060x1900x3; that is fixed, and thus the GMAC computation would normally be based on a fixed input size. Any further pre-processing you want to apply to the input images (e.g. gamma correction, downsampling) should happen within the model, and should be accounted for in the runtime measurement and number of operations. We have updated the example scripts to further clarify this.
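As a rough illustration of why the fixed input size matters, the MAC count of a single convolution layer at the challenge resolution can be computed analytically. This is only a sketch with a hypothetical first layer (18 input channels from 3 concatenated 6-channel inputs, 64 filters, 3x3 kernel), not the official counting script:

```python
def conv2d_macs(c_in, c_out, k, h, w):
    # MACs for one conv layer with 'same' padding and stride 1:
    # each output pixel needs c_in * k * k multiply-accumulates
    # per output channel.
    return c_in * k * k * c_out * h * w

# Hypothetical first layer at the fixed challenge resolution 1060x1900.
macs = conv2d_macs(c_in=18, c_out=64, k=3, h=1060, w=1900)
gmacs = macs / 1e9
print(f"{gmacs:.2f} GMACs")  # -> 20.88 GMACs
```

Because the spatial size is fixed, such per-layer counts (and the totals produced by get_gmacs_and_params) are deterministic for a given architecture.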

Best regards,
Edu.

Posted by: EPerezPellitero @ Feb. 23, 2022, 4:07 p.m.

Hi, I also have some questions.
The updated toy_model.py script shows:
def LDR2HDR(self, img, expo, gamma=2.2):
    # Map the LDR input image to the HDR domain
    return img ** gamma / expo
but the io_usage_example.py shows:
image_long_corrected = (((image_long**gamma)*2.0**(-1*floating_exposures[2]))**(1/gamma))
but the exposure values and their steps are not fixed in the exposures.npy files of the given datasets; for example, there are [-3, 0, 3] and [-2, 0, 2].
So I think we should use exposures.npy instead of fixed exposure values.
However, if I change the input to (imgs, exp), there will be problems in calculating MACs and parameters. Is there any suggestion about this?

Posted by: FangyaLi @ Feb. 25, 2022, 8:29 a.m.

Hello FangyaLi,

All pre-processing should be included in the computation of the number of operations and in the runtime measurements. You are free to use any pre-processing you want, or none at all; in general, one data point is fed to the model and the challenge data has a constant size, so it is unlikely that participants need to modify that. In practice, image-based processing is what generally dominates the number of operations, and things like the exposure value normalization will have an almost negligible impact in most cases. We have recently updated the related scripts to further clarify their use.

We would like to point out that the scripts we provide are for guidance only; they are just an illustration of how to perform the measurements for a "toy model" and "toy processing". They are by no means a rigid template that will fit all possible designs, nor do they prescribe any best practices for how to process the data or create a neural network.

There is no problem if you want fixed-value exposures instead of floating exposures; as I said, this is just a "toy example" and not a prescription of what participants should do. Please just modify the code accordingly so that it reflects all the computations happening within your model. Generally, the size of the input data remains unchanged, and participants just need to ensure that any resizing or similar processing is accounted for in the measurements.
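To illustrate the point about per-scene exposures: the stops can be read from exposures.npy and applied inside the model's pre-processing, so the model input remains (imgs) with a fixed size. A minimal NumPy sketch, with hypothetical names and randomly generated stand-in images:

```python
import numpy as np

def ldr_to_hdr(img, expo, gamma=2.2):
    # Map an LDR image to the HDR domain (as in the toy_model.py helper):
    # linearise with the gamma curve, then divide by the exposure factor.
    return img ** gamma / expo

# Exposure stops vary per scene, so read them from exposures.npy rather
# than hard-coding [-3, 0, 3]; the exposure factor is 2**stop.
stops = np.array([-2.0, 0.0, 2.0])  # e.g. loaded via np.load("exposures.npy")
images = np.random.rand(3, 1060, 1900, 3).astype(np.float32)  # three LDR inputs
hdr_inputs = np.stack([ldr_to_hdr(im, 2.0 ** s) for im, s in zip(images, stops)])
print(hdr_inputs.shape)  # (3, 1060, 1900, 3)
```

Since this mapping is a per-pixel power and division, its operation count is tiny compared to the convolutions that follow, which is why the exposure normalization has an almost negligible impact on the totals.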

Posted by: EPerezPellitero @ Feb. 25, 2022, 4:01 p.m.