NeRF-based 3D Reconstruction and Orthographic Novel View Synthesis Experiments Using City-Scale Aerial Images
Abstract: City-scale 3D reconstruction from drone imagery offers many benefits for creating dynamic digital twin models in geospatial and remote sensing applications. We experiment with Neural Radiance Fields (NeRF) to generate novel orthorectified views, point clouds, and 3D meshes using our city-scale image dataset, captured from drones and crewed aircraft flying circular orbits. We report on how different parameters related to the NeRF network architecture, ray sampling density, and input image view sampling affect the quality of the results. We compare these results with traditional Structure from Motion (SfM) techniques and lidar point clouds. NeRFs generate top-down novel views of city environments that are highly competitive with those from traditional SfM techniques, but the underlying 3D structure tends to be less accurate for large-scale scenes. NeRFs can also capture detail missing from lidar collections, such as the side walls of buildings. Finally, we propose a patch-based region-of-interest training approach to generate high-quality novel top-down views of large city environments more efficiently for georegistration purposes.