What does HackerNews think of instant-ngp?

Instant neural graphics primitives: lightning fast NeRF and more

Language: Cuda

instant-ngp ([1]) from NVIDIA can render NeRF in VR in real time, provided you have a high-end desktop GPU. Note that instant-ngp is not as photo-realistic as Zip-NeRF. But it's still very good!

1. https://github.com/NVlabs/instant-ngp

Check out https://github.com/NVlabs/instant-ngp.

Trains a NeRF in a couple of seconds.

I went on a trip to Italy last year and took a video while walking around Michelangelo’s David. Even with that relatively poor image quality, this let me turn it into a pretty high-quality NeRF.

In my experience NeRF tends to work better for objects that photogrammetry struggles with, like transparent objects. By using marching cubes you can also export scenes as a mesh. instant-ngp (https://github.com/NVlabs/instant-ngp) has done an amazing job of making NeRF accessible, but you still need camera positions from other software such as COLMAP.
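For a sense of what "camera positions from other software" means in practice: instant-ngp consumes a NeRF-style transforms.json listing each image with its camera-to-world matrix, which you'd normally get by running COLMAP (e.g. via instant-ngp's colmap2nerf.py script). A rough sketch of that file's shape, with placeholder values (the helper function and the identity poses are mine for illustration, not real calibration output):

```python
import json
import numpy as np

# Sketch of the NeRF-style transforms.json camera-pose file that
# instant-ngp consumes; real poses come from COLMAP, not identities.

def make_transforms(image_paths, fov_x_radians):
    """Build a minimal pose file dict with dummy camera matrices."""
    frames = []
    for path in image_paths:
        frames.append({
            "file_path": path,
            # 4x4 camera-to-world matrix; identity as a stand-in here.
            "transform_matrix": np.eye(4).tolist(),
        })
    return {"camera_angle_x": fov_x_radians, "frames": frames}

transforms = make_transforms(["images/0001.png", "images/0002.png"], 0.8)
print(len(transforms["frames"]))  # 2
print(json.dumps(transforms["frames"][0]["file_path"]))
```

The key point is that NeRF training assumes these poses are already known; estimating them is a separate structure-from-motion problem that COLMAP solves.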
This is great, and the paper+codebase they're referring to (not linked there; see [1]) is neat too.

The research is moving fast though, so if you want something almost as fast without specialized CUDA kernels (just plain PyTorch), you're in luck: https://github.com/apchenstu/TensoRF

As a bonus you also get a more compact representation of the scene.
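The compactness comes from factorizing the scene volume into low-rank tensor components instead of storing a dense grid. A toy numpy illustration of the idea (a rank-R CP decomposition; TensoRF's actual "VM" decomposition is a refinement of this, and this is not its code):

```python
import numpy as np

# Toy illustration of why a factorized volume is compact: approximate a
# dense 3-D grid by a sum of R outer products of three 1-D vectors.

rng = np.random.default_rng(0)
N, R = 64, 4  # grid resolution and rank

# Build a grid that is exactly rank R, so factorization is lossless here.
u = rng.standard_normal((R, N))
v = rng.standard_normal((R, N))
w = rng.standard_normal((R, N))
dense = np.einsum("ri,rj,rk->ijk", u, v, w)

dense_params = dense.size        # 64**3 = 262144 stored values
factored_params = 3 * R * N      # 3 * 4 * 64 = 768 stored values

# Reconstruct the grid from the factors, one rank-1 term at a time.
recon = np.zeros((N, N, N))
for r in range(R):
    recon += np.einsum("i,j,k->ijk", u[r], v[r], w[r])

print(dense_params, factored_params)  # 262144 768
print(np.allclose(dense, recon))      # True
```

Real scenes aren't exactly low-rank, so TensoRF trades a little reconstruction error for a ~hundredfold reduction in stored parameters.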

[1] https://github.com/NVlabs/instant-ngp

Still, NVIDIA's achievement (and Thomas Müller's in particular) is amazing. Thomas and his collaborators achieved an almost 1000x performance improvement through a combination of algorithmic and implementation tricks.
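The central algorithmic trick is the multiresolution hash encoding: integer grid coordinates at each resolution level are hashed into a small learned feature table instead of storing a dense grid. A toy numpy sketch of the spatial hash from the paper, using its primes (the fused CUDA implementation is of course far more involved):

```python
import numpy as np

# Toy sketch of the spatial hash in instant-ngp's multiresolution hash
# encoding: h(x) = (x0*p0 XOR x1*p1 XOR x2*p2) mod T, with the primes
# from the paper. Collisions are tolerated and resolved by training.

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_coords(coords, table_size):
    """Map (n, 3) integer grid coordinates to (n,) feature-table indices."""
    coords = coords.astype(np.uint64)
    h = coords[:, 0] * PRIMES[0]
    h ^= coords[:, 1] * PRIMES[1]  # uint64 products wrap mod 2**64
    h ^= coords[:, 2] * PRIMES[2]
    return h % np.uint64(table_size)

T = 2 ** 14  # entries per level (the paper uses 2**14 .. 2**24)
coords = np.array([[0, 0, 0], [1, 2, 3], [511, 7, 42]])
idx = hash_coords(coords, T)
print(idx)  # indices into a small table of learned feature vectors
```

Because the table is tiny compared to a dense voxel grid, lookups are cache-friendly and the whole encoding plus a small MLP fits comfortably on the GPU, which is a large part of the speedup.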

I highly recommend trying this at home:

https://nvlabs.github.io/instant-ngp/

https://github.com/NVlabs/instant-ngp

Very straightforward and gives better insight into what NeRF is than any shiny marketing demo.