Neural Radiance Fields (NeRF)

To look into: https://developer.nvidia.com/blog/getting-started-with-nvidia-instant-nerfs/

Damn, I work at NVIDIA. I should be at the forefront of this.

FUNDAMENTAL TEACHING: https://sites.google.com/berkeley.edu/nerf-tutorial/home

I should be familiar with ray tracing

Other nooby tutorials?

Original paper: https://arxiv.org/abs/2003.08934

This idea is so cool: you capture a set of images of a scene and then you can move the camera freely through it. I think this was the catalyst for many of the fly-through things we see today that Google is working on.

Also, I think this is what Waabi is doing.

https://datagen.tech/guides/synthetic-data/neural-radiance-field-nerf/

So how does it work?

The input to NeRF is 5D data (see the sketch after this list):

  • the input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ))
  • the output is the volume density and the view-dependent emitted radiance at that spatial location
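A minimal sketch of that mapping, assuming PyTorch. The class and layer sizes are mine; the real model positionally encodes its inputs and injects the viewing direction later in the network, which this skips.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy version of the NeRF mapping: 5D coordinate -> (RGB, density)."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),   # 5D input: (x, y, z, theta, phi)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # 4D output: (r, g, b, sigma)
        )

    def forward(self, coords: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        out = self.mlp(coords)
        rgb = torch.sigmoid(out[..., :3])      # radiance, squashed into [0, 1]
        sigma = torch.relu(out[..., 3])        # volume density must be non-negative
        return rgb, sigma

# One query: a position (x, y, z) plus a viewing direction (theta, phi).
rgb, sigma = TinyNeRF()(torch.randn(1, 5))
```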

They do ray marching: sample points along each camera ray, query the network for density and color at each sample, and alpha-composite the samples into a pixel color.
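A minimal sketch of that compositing step, in NumPy for clarity. It follows the standard volume-rendering quadrature from the paper, but the function and variable names are mine.

```python
import numpy as np

def render_ray(sigmas, rgbs, ts):
    """Composite N samples along one ray into a single pixel color.

    sigmas: (N,) densities, rgbs: (N, 3) colors, ts: (N,) sample depths.
    """
    deltas = np.diff(ts, append=1e10)             # spacing between adjacent samples
    alphas = 1.0 - np.exp(-sigmas * deltas)       # opacity contributed by each segment
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])  # transmittance T_i
    weights = trans * alphas                      # w_i = T_i * (1 - exp(-sigma_i * delta_i))
    return (weights[:, None] * rgbs).sum(axis=0)  # expected color along the ray

# 64 samples between depths 2.0 and 6.0 on one ray.
color = render_ray(np.full(64, 0.1), np.random.rand(64, 3), np.linspace(2.0, 6.0, 64))
```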

How do they estimate the camera poses? Ahh, they use COLMAP (sketch of the pipeline below):

  • “… and use the COLMAP structure-from-motion package [39] to estimate these parameters for real data)”
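For reference, a sketch of running that step before training, assuming the `colmap` CLI is installed and on the PATH. The three stages (feature extraction, matching, sparse mapping) are COLMAP's standard pipeline; the paths here are placeholders of mine.

```python
# Estimate camera intrinsics/poses with COLMAP's structure-from-motion CLI.
import os
import subprocess

db, images, sparse = "scene/database.db", "scene/images", "scene/sparse"
os.makedirs(sparse, exist_ok=True)

for cmd in [
    ["colmap", "feature_extractor", "--database_path", db, "--image_path", images],
    ["colmap", "exhaustive_matcher", "--database_path", db],
    ["colmap", "mapper", "--database_path", db, "--image_path", images,
     "--output_path", sparse],
]:
    subprocess.run(cmd, check=True)

# scene/sparse/ now holds the camera parameters NeRF needs to cast training rays.
```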

Nerfies

NeRFs usually produce blurry results when the subject is moving; however, a paper that came out 3 years ago (Nerfies) addresses this:

https://www.youtube.com/watch?v=IDMiMKWucaI