The Difference Between NeRF and Photogrammetry 3D Scans

Published: 2022/11/02
Last week, I made my first 3D scan using Polycam. It uses a technology called photogrammetry to generate a 3D model from a series of photos taken at multiple angles. That 3D model can then be used in AR or VR applications, which is what makes it so interesting.
 
Recently, a new technology called NeRF (Neural Radiance Fields) appeared and made a ton of headlines. It's similar to photogrammetry in that it also visualises a 3D scene or object using images as input, but it differs from photogrammetry in important ways.
 
The main difference between the two technologies is that photogrammetry generates a 3D model with meshes and textures, stored in a format that traditional 3D tools can read. So we can use it in 3D animation, games, or VR and AR applications.
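To make that concrete, here's a minimal sketch, assuming Python with the trimesh library and a hypothetical export called scan.glb, of how a photogrammetry scan is just an ordinary mesh that any 3D tool can open:

```python
# Minimal sketch: inspecting a photogrammetry export with trimesh.
# Assumes the `trimesh` package is installed and a hypothetical
# file "scan.glb" exported from a photogrammetry app like Polycam.
import trimesh

mesh = trimesh.load("scan.glb", force="mesh")  # triangles + textured surface
print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} faces")

# Because it is a plain mesh, any 3D pipeline can consume it:
mesh.export("scan.obj")  # e.g. re-export for Blender, Unity, etc.
```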

A NeRF, on the other hand, generates a 'radiance field' instead of a traditional 3D model, so the way the scene is stored is very different.

NeRF uses machine learning to create this radiance field. With it, you can render an object from viewpoints that were never photographed, so when you move around the scene, it appears fully three-dimensional to your eyes. The radiance field has learnt what the object looks like from any angle and renders the image you see on your screen.
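At its core, the learned field is just a function from a 3D position and a viewing direction to a colour and a density. A minimal sketch of that idea, assuming PyTorch (the layer sizes are illustrative, and the positional encoding a real NeRF uses is omitted for brevity):

```python
# Sketch of the core NeRF idea, assuming PyTorch.
# The network maps a 3D position plus a viewing direction to a
# colour and a density -- that mapping IS the radiance field.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),  # (x, y, z) + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                 # RGB colour + density sigma
        )

    def forward(self, xyz: torch.Tensor, viewdir: torch.Tensor):
        out = self.mlp(torch.cat([xyz, viewdir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # colour in [0, 1]
        sigma = torch.relu(out[..., 3])     # non-negative density
        return rgb, sigma
```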
 
To give an example: ten years ago, we used a series of images with a slider to make an object on a website appear three-dimensional. I remember this cool slider on the Apple website for viewing the iPod touch from multiple angles. When you twirl it around, it almost seems like a 3D model, right?

But if I want to see the iPod from an angle that none of the pictures captured, I'm out of luck. With NeRF, we can train the machine learning model, and the resulting radiance field can generate images of the iPod from those new perspectives too.
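Rendering one of those new perspectives boils down to volume rendering: step along each camera ray, query the field at every sample point, and blend the colours by density. Here's a hedged sketch for a single ray, reusing the TinyNeRF sketch above (the near/far range and sample count are made-up values):

```python
# Sketch of rendering one ray through the radiance field
# via alpha compositing; all numeric values are illustrative.
import torch

def render_ray(model, origin, direction, near=0.0, far=4.0, n_samples=64):
    t = torch.linspace(near, far, n_samples)       # depths along the ray
    points = origin + t[:, None] * direction       # (n_samples, 3) positions
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(points, dirs)

    delta = (far - near) / n_samples               # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * delta)        # opacity of each sample
    # Transmittance: how much light survives up to each sample.
    trans = torch.cat([torch.ones(1),
                       torch.cumprod(1.0 - alpha + 1e-10, dim=0)[:-1]])
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)     # final pixel colour
```

Training then just compares pixels rendered this way against the input photos and backpropagates the error, which is how the field "learns" the scene.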

I recreated the iPod touch slider at home and fed the images into Luma; this was the result. The original iPod slider images didn't work, because they have no background.
 
The nice thing about NeRF is that reflections and light effects can be captured very accurately. Water, glass, and shiny surfaces usually don’t work well with photogrammetry and the traditional 3D model it creates.

A downside of NeRF right now is that it's not easily applied in AR or VR applications yet. However, that will improve over time, with better exporting tools and special viewing applications.
 
For my own experimentation, I used Luma and Polycam. I edited the AR scenes with www.wintor.com.
Follow me for more insights about augmented, mixed and virtual reality. Bye!