Difference between Photogrammetry and NeRF (Neural Radiance Field)
In photogrammetry, the goal is to capture as many overlapping photographs as possible of the object or scene to be reconstructed. The required overlap varies with the acquisition system, be it terrestrial, aerial, or satellite. After data acquisition, each photo is analyzed to detect key points and features using algorithms such as SIFT or SURF. These points are then matched across photos, typically with matchers such as FLANN or bag-of-words (BoW), and the resulting correspondences feed into structure from motion (SfM), which recovers the camera poses and the spatial relationships between images. From these, a sparse 3D point cloud is built and then densified by reprojecting rays back into the images (multi-view stereo), enriching the sparse reconstruction from the prior steps. This dense point cloud is triangulated into a 3D mesh, which is finally textured with color information extracted from the images for a realistic result.
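The core of the structure-from-motion step above is triangulation: once a feature has been matched between two images with known camera poses, its 3D position can be recovered. A minimal sketch of linear (DLT) triangulation with numpy, using two toy cameras (the function and camera setup here are illustrative, not part of any specific photogrammetry package):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) pixel coordinates of the matched feature in each image
    """
    # Each observation contributes two rows of the homogeneous system A X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A ground-truth 3D point and its projections into each image.
X_true = np.array([0.5, 0.2, 4.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

x1, x2 = project(P1, X_true), project(P2, X_true)
print(triangulate(P1, P2, x1, x2))  # recovers X_true (up to numerical noise)
```

Real pipelines repeat this over thousands of matched features at once and refine the result with bundle adjustment, but the geometry is the same.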
A Neural Radiance Field (NeRF), by contrast, can be compared to painting a canvas: instead of analyzing discrete features, it learns from entire images, modeling the relationship between every pixel and the corresponding 3D point in space. Because the camera poses fed to NeRF are usually derived from a photogrammetric pipeline such as COLMAP, we can think of NeRF as an extension of photogrammetry rather than a replacement. Using those pixel-to-3D relationships, NeRF builds a radiance field, a map of how light interacts with the scene at every point and viewing direction. A neural network is trained on this field so that it can render the scene from any angle by predicting the color and density along each camera ray. It is like painting a picture where every brushstroke considers the scene's overall structure and lighting.
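The "predicting color along a ray" step is volume rendering: the network's predicted densities and colors at sample points along a ray are composited into one pixel color. A minimal numpy sketch of that compositing rule, with hand-picked toy values standing in for network outputs:

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """NeRF-style volume rendering along a single ray.

    sigmas : (N,)   densities predicted at N sample points
    colors : (N,3)  RGB colors predicted at those points
    deltas : (N,)   distances between consecutive samples
    """
    # Alpha: probability the ray terminates within each interval.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    # Final pixel color is the weighted sum of sample colors.
    return (weights[:, None] * colors).sum(axis=0)

# Toy ray: two samples in empty space, then an opaque red surface.
sigmas = np.array([0.0, 0.0, 50.0, 50.0])
colors = np.array([[0, 0, 0], [0, 0, 0], [1, 0, 0], [1, 0, 0]], dtype=float)
deltas = np.full(4, 0.1)
print(volume_render(sigmas, colors, deltas))  # ≈ [1, 0, 0]: the ray sees red
```

During training, NeRF compares pixels rendered this way against the input photographs and backpropagates the error through the network; here the compositing alone is shown.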
Some of the key differences are as follows:

- Output: photogrammetry produces explicit geometry (point clouds and textured meshes), while NeRF stores the scene implicitly in the weights of a neural network.
- Appearance: photogrammetry bakes a single fixed texture, whereas NeRF can capture view-dependent effects such as reflections and transparency.
- Compute: classical photogrammetry pipelines are mature and can run on CPUs, while NeRF requires GPU-intensive training for each scene.
- Downstream use: meshes from photogrammetry plug directly into CAD, GIS, and game engines, while NeRF primarily produces novel rendered views.
Now, if the question arises of choosing between them: you will always need some part of photogrammetry in the background, even for NeRF. Beyond that, the choice will depend on your needs and resources.
Another interesting development, which can be thought of as an extension of NeRF, is Gaussian splatting. It offers a more efficient representation, adaptive density through anisotropic splats that conform better to scene geometry, and faster rendering at comparable or better quality than NeRF. We will discuss it in detail in upcoming posts.
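To give a flavor of what "anisotropic splat" means ahead of that post: each splat is a Gaussian whose covariance can stretch in one direction and shrink in another, so it can hug a flat or elongated surface. A small illustrative numpy sketch (the function name and values are assumptions for illustration, not the actual splatting renderer):

```python
import numpy as np

def splat_weight(pixel, mean, cov):
    """Contribution of an anisotropic 2D Gaussian splat at a pixel.

    mean : splat center in screen space
    cov  : 2x2 covariance (the screen-space projection of the
           3D Gaussian's anisotropic covariance)
    """
    d = pixel - mean
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))

# An elongated splat: large variance along x, small along y.
mean = np.array([10.0, 10.0])
cov = np.array([[4.0, 0.0],
                [0.0, 0.25]])

print(splat_weight(np.array([12.0, 10.0]), mean, cov))  # decays slowly along x
print(splat_weight(np.array([10.0, 12.0]), mean, cov))  # decays fast along y
```

An isotropic kernel would weight both pixels equally; the anisotropic covariance is what lets splats stretch along surfaces, which is part of why the representation adapts to geometry so well.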