A Down-to-Earth Approach for Camera to World Map Georeferencing Using SfM

Published: 01 Jan 2022, Last Modified: 01 Nov 2023. Venue: AIPR 2022.
Abstract: This paper explores georegistration of drone imagery by mapping a collection of images onto a satellite-centric view, such as that provided by Google Earth. Structure-from-Motion (SfM) algorithms typically produce camera poses in a coordinate system that is related to a global coordinate system by a similarity transformation. We developed Down-to-Earth (D2E), a georegistration method that remaps the collection of camera views from relative to world coordinates, i.e., back into latitude, longitude, and altitude, using estimated transformations. We circumvent the difficulty of 2D-to-3D feature matching by projecting the original drone images onto an estimated local ground plane. Matching oblique drone images against the satellite map across significant perspective viewpoint differences is a challenge for traditional handcrafted image features; deep-learning-based features performed better and were used to estimate a five-degree-of-freedom similarity transform that aligns the drone imagery with the satellite world map. The accuracy of the similarity transform can be improved by using the reconstructed 3D point cloud to orthorectify the feature points. The similarity transform is then applied to the set of corrected drone camera poses to return them to world space. We evaluate D2E georegistration accuracy on a set of manually selected ground-truth points and show improvement over bundle-adjusted cameras, even when the cameras are globally geodesically aligned.
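A five-degree-of-freedom similarity transform of the kind described above (uniform scale, a single yaw rotation about the gravity axis, and a 3D translation) can be estimated in closed form from matched point pairs. The sketch below is our own illustration of such an estimator, not the paper's implementation; the function name and the assumption of gravity-aligned coordinates are ours.

```python
import numpy as np

def estimate_5dof_similarity(src, dst):
    """Closed-form 5-DoF similarity (uniform scale, yaw-only rotation,
    3D translation) aligning gravity-aligned src points to dst points.
    Illustrative sketch; not from the paper."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    X, Y = src - mu_s, dst - mu_d  # centered point sets
    # Optimal yaw angle from the xy components (2D Procrustes)
    theta = np.arctan2((X[:, 0] * Y[:, 1] - X[:, 1] * Y[:, 0]).sum(),
                       (X[:, 0] * Y[:, 0] + X[:, 1] * Y[:, 1]).sum())
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    # Optimal uniform scale given R, then translation
    scale = (Y * (X @ R.T)).sum() / (X ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t
```

In practice such an estimator would be wrapped in a robust loop (e.g., RANSAC) since deep-feature matches between drone and satellite imagery contain outliers.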