To NeRF or not to NeRF: A View Synthesis Challenge for Human Heads @ ICCV 2023 Forum


> Could you share more information about how you train the TensoRF baseline?

This is a response to aarush98’s question from another thread, as a separate topic.

Hi aarush98,

NeRF-style models are quite sensitive to the object bounds (near and far), and we found that different models require different settings. For example, the vanilla NeRF model works with our provided near (0.8) and far (5.0) bounds, but TensoRF needed two separate ranges (scene bounds = [3.5, 7.0] and object bounds = [0.4, 2.8]), and DirectVoxGO worked with near = 1 and far = 3.
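To make the role of the bounds concrete, here is a minimal, illustrative ray-sampling sketch (not TensoRF's actual code): sample points are only ever placed between near and far along each ray, so bounds that miss the head either clip it or waste samples on empty space.

```python
import numpy as np

def sample_along_rays(rays_o, rays_d, near, far, n_samples):
    # Evenly spaced depths between the near and far bounds; nothing
    # outside [near, far] is ever sampled, which is why bad bounds
    # silently degrade reconstruction quality.
    t = np.linspace(near, far, n_samples)                      # (n_samples,)
    pts = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]
    return pts, t

# Example: one ray from the origin looking down +z, with the
# provided vanilla-NeRF bounds near=0.8, far=5.0.
rays_o = np.zeros((1, 3))
rays_d = np.array([[0.0, 0.0, 1.0]])
pts, t = sample_along_rays(rays_o, rays_d, near=0.8, far=5.0, n_samples=64)
```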

We suggest sweeping bound values in roughly the 0 to 5 range to find initial parameters for your model. For TensoRF, we followed these steps:

1. Create a config file for the ILSH dataset with the parameters we used (dataset_name = llff, downsample_train = 1.0, ndc_ray = 0, n_iters = 50000, n_lamb_sigma = [16,4,4], n_lamb_sh = [48,12,12], shadingMode = MLP_Fea, fea2denseAct = relu, view_pe = 0, fea_pe = 0, TV_weight_density = 1.0, TV_weight_app = 1.0).
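Collected into a TensoRF-style text config, the parameters above would look roughly like this (the file name and exact layout are assumptions; compare against the examples under configs/ in the TensoRF repository):

```
dataset_name = llff
downsample_train = 1.0
ndc_ray = 0
n_iters = 50000
n_lamb_sigma = [16,4,4]
n_lamb_sh = [48,12,12]
shadingMode = MLP_Fea
fea2denseAct = relu
view_pe = 0
fea_pe = 0
TV_weight_density = 1.0
TV_weight_app = 1.0
```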

2. Set the scene bounds (near/far = [3.5, 7.0]) and the object bounds (near/far = [0.4, 2.8]).
Scene-bound: https://github.com/apchenstu/TensoRF/blob/main/dataLoader/llff.py#L142
Object-bound: https://github.com/apchenstu/TensoRF/blob/main/dataLoader/llff.py#L159
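As a sketch, the two overrides amount to replacing the loader's computed values with fixed ones. The attribute names below are illustrative stand-ins, not the loader's actual identifiers:

```python
class BoundsOverride:
    """Illustrative stand-in for the bound settings in TensoRF's LLFF
    loader; the real loader derives its bounds from poses_bounds.npy,
    and here they are simply hard-coded to the values that worked
    for the ILSH head scans."""
    def __init__(self):
        self.scene_near_far = [3.5, 7.0]   # sampling range over the whole scene
        self.object_near_far = [0.4, 2.8]  # tighter range around the head

bounds = BoundsOverride()
```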

3. Since we set ndc_ray = 0, we commented out the NDC-specific lines.
In the LLFF loader:
https://github.com/apchenstu/TensoRF/blob/main/dataLoader/llff.py#L218
In train file: https://github.com/apchenstu/TensoRF/blob/main/train.py#L255-L258
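Rather than commenting lines out by hand, the same effect can be sketched as a guard on the flag. Here warp_to_ndc is only a placeholder for TensoRF's ndc_rays_blender helper, and the function names are illustrative:

```python
import numpy as np

def warp_to_ndc(rays_o, rays_d):
    # Placeholder for TensoRF's ndc_rays_blender, which maps rays into
    # normalized device coordinates for forward-facing scenes.
    raise NotImplementedError

def build_rays(rays_o, rays_d, ndc_ray):
    # With ndc_ray == 0 the NDC warp is skipped entirely, which is what
    # commenting out those lines in llff.py and train.py achieves.
    if ndc_ray:
        return warp_to_ndc(rays_o, rays_d)
    return rays_o, rays_d

rays_o = np.zeros((2, 3))
rays_d = np.ones((2, 3))
out_o, out_d = build_rays(rays_o, rays_d, ndc_ray=0)
```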

4. Use the provided border masks to train only on valid image regions (mask = 1).
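A sketch of what step 4 means in practice, assuming the border masks load as per-pixel 0/1 arrays and rays are generated one per pixel (function and variable names are illustrative):

```python
import numpy as np

def filter_rays_by_mask(all_rays, all_rgbs, mask):
    # Keep only rays whose pixel lies inside the valid region (mask == 1),
    # so the model is never supervised on border pixels.
    valid = mask.reshape(-1) == 1
    return all_rays[valid], all_rgbs[valid]

# Toy example: 4 rays, two of which fall inside the valid region.
rays = np.arange(4 * 6, dtype=np.float32).reshape(4, 6)  # (origin, dir) per ray
rgbs = np.arange(4 * 3, dtype=np.float32).reshape(4, 3)
mask = np.array([1, 0, 1, 0])
kept_rays, kept_rgbs = filter_rays_by_mask(rays, rgbs, mask)
```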

5. Disable pose centering: https://github.com/apchenstu/TensoRF/blob/main/dataLoader/llff.py#L172
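Conceptually, disabling pose centering means skipping the re-centering transform so the original world coordinates are kept. A simplified, translation-only sketch (TensoRF's actual center_poses also aligns rotations to the average pose):

```python
import numpy as np

def maybe_center_poses(poses, disable=True):
    # Simplified: subtract the mean camera position from each pose.
    # Disabling this (as we did for ILSH) leaves the original world
    # frame, in which the bounds above were chosen, intact.
    if disable:
        return poses
    centered = poses.copy()
    centered[:, :3, 3] -= poses[:, :3, 3].mean(axis=0)
    return centered

poses = np.tile(np.eye(4), (3, 1, 1))
poses[:, :3, 3] = [[0, 0, 1], [0, 0, 2], [0, 0, 3]]
kept = maybe_center_poses(poses, disable=True)
```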

These instructions should help you replicate the baseline and get started. Please note, however, that we expect participants to provide a novel solution, not simply replicate and fine-tune our baseline.

I hope this helps!

Best,
Young

Posted by: youngkyoonjang @ June 12, 2023, 10:45 a.m.

Hi Young,

Thanks for sharing the implementation details.
I trained the model on my own machine following the provided baseline instructions, but its performance always falls somewhat short of the baseline.
Could other parameters, such as batch_size, upsamp_list, or update_AlphaMask_list, be causing this gap? Can you provide more details?

Greetings,
Jerry Zhu.

Posted by: jerry_zhu @ June 26, 2023, 9:29 a.m.

Hi Jerry,

Thank you for your question and participation! Since we only provide step-by-step instructions rather than the actual code, your results may vary. We expect participants to come up with novel solutions rather than simply replicating and fine-tuning the TensoRF baseline. Identifying a completely new baseline method that outperforms ours could be one way to win, but we do not recommend replicating a baseline that others have already produced on the same leaderboard.

Providing more details about our baseline implementation would likely only lead to similar marginal improvements, which is not the main goal of this workshop and challenge. We aim to encourage diverse approaches rather than a single method or direction guided by us, so we have decided not to release any further information.

This workshop does not restrict methods: you can train a general model, a scene-specific model, or use other methods to provide an additional input source. It is not limited to NeRF-based methods but is open to other potential approaches. Thank you for understanding.

Posted by: youngkyoonjang @ June 26, 2023, 10:20 a.m.