GeoNeRF: Generalizing NeRF with Geometry Priors

Mohammad Mahdi Johari 1,2, Yann Lepoittevin 3, François Fleuret 4,2,1
1 Idiap Research Institute    2 EPFL    3 ams Osram    4 University of Geneva   

CVPR 2022



Abstract

We present GeoNeRF, a generalizable photorealistic novel view synthesis method based on neural radiance fields. Our approach consists of two main stages: a geometry reasoner and a renderer. To render a novel view, the geometry reasoner first constructs cascaded cost volumes for each nearby source view. Then, using a Transformer-based attention mechanism and the cascaded cost volumes, the renderer infers geometry and appearance, and renders detailed images via classical volume rendering techniques. This architecture, in particular, allows sophisticated occlusion reasoning, gathering information from consistent source views. Moreover, our method can easily be fine-tuned on a single scene, and renders results competitive with per-scene optimized neural rendering methods at a fraction of the computational cost. Experiments show that GeoNeRF outperforms state-of-the-art generalizable neural rendering models on various synthetic and real datasets. Lastly, with a slight modification to the geometry reasoner, we also propose an alternative model that adapts to RGBD images. This model directly exploits the depth information that is often available thanks to depth sensors.
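The final step mentioned above, classical volume rendering, composites per-sample densities and colors along each ray into a pixel color using the standard quadrature from the NeRF literature. A minimal NumPy sketch of that compositing step (the function name and array shapes are illustrative, not the paper's actual code):

```python
import numpy as np

def volume_render(sigmas, rgbs, deltas):
    """Classical volume rendering quadrature along one ray.

    sigmas: (N,) per-sample volume densities
    rgbs:   (N, 3) per-sample RGB colors
    deltas: (N,) distances between adjacent samples
    Returns the composited RGB color of the ray.
    """
    # Opacity of each sample from its density and interval length
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Compositing weights; high-density samples near the camera dominate
    weights = alphas * trans
    return (weights[:, None] * rgbs).sum(axis=0)
```

Because the weights depend only on densities in front of each sample, an opaque surface early on the ray occludes everything behind it, which is the behavior the occlusion reasoning in the geometry reasoner relies on.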

[Overview figure: the GeoNeRF pipeline]



Qualitative comparison with baselines


Methods shown: Reference, GeoNeRF (ours), IBRNet, MVSNeRF

Video comparison between GeoNeRF and state-of-the-art methods

Various sequences rendered with GeoNeRF

BibTeX


@inproceedings{johari2022geonerf,
  title={GeoNeRF: Generalizing NeRF with Geometry Priors},
  author={Johari, M. M. and Lepoittevin, Y. and Fleuret, F.},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}