Integrating Neural Radiance Fields With Deep Reinforcement Learning for Autonomous Mapping of Small Bodies

Published: 01 Jan 2024, Last Modified: 04 Mar 2025. IEEE Trans. Geosci. Remote Sens., 2024. License: CC BY-SA 4.0
Abstract: Surface 3-D mapping of small bodies is crucial for planning in situ deep-space exploration, yet current approaches rely heavily on manual processing, which significantly increases time costs. To achieve autonomous and efficient 3-D mapping of small bodies, we propose a vision-based autonomous 3-D mapping framework that integrates neural radiance fields (NeRFs) with deep reinforcement learning (DRL). We introduce a NeRF variant, built within the optimization loop of the DRL agent, for instant 3-D mapping and quantitative uncertainty estimation of the global map. This uncertainty estimate is used to optimize the DRL policy, which then predicts the next best state that minimizes surface mapping uncertainty. Moreover, we introduce a new six-degrees-of-freedom (6-DoF) simulation environment that simulates the vision-based 3-D mapping process of small bodies, and we demonstrate that our framework significantly enhances 3-D mapping quality. Our framework offers a new technological direction for small-body exploration missions, potentially improving efficiency while advancing operational reliability.
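To make the closed loop between instant NeRF mapping and uncertainty-driven view selection more concrete, the sketch below outlines one way such an interaction could be structured. All class and function names (InstantNeRF, UncertaintyAwarePolicy, render_observation) are hypothetical placeholders for the components described in the abstract, not the authors' implementation; the uncertainty update and policy are deliberately simplified stand-ins.

```python
# Minimal sketch of an uncertainty-driven mapping loop (assumptions only):
# a NeRF-like model supplies a per-region uncertainty estimate, and a policy
# uses that estimate to choose the next 6-DoF observation state.
import numpy as np

rng = np.random.default_rng(0)


class InstantNeRF:
    """Placeholder for a NeRF variant that is updated per observation and
    reports per-region epistemic uncertainty over the surface map."""

    def __init__(self, n_regions: int = 64):
        self.uncertainty = np.ones(n_regions)  # start fully uncertain

    def update(self, pose: np.ndarray, image: np.ndarray) -> None:
        # A real model would train on the new image; here we simply reduce
        # uncertainty for a few regions "seen" from this pose.
        seen = rng.choice(len(self.uncertainty), size=8, replace=False)
        self.uncertainty[seen] *= 0.5

    def global_uncertainty(self) -> float:
        return float(self.uncertainty.mean())


class UncertaintyAwarePolicy:
    """Placeholder DRL policy: given the current mapping uncertainty, propose
    the next 6-DoF state (position + orientation) expected to reduce it."""

    def select_next_state(self, uncertainty: np.ndarray) -> np.ndarray:
        # A trained policy would use the uncertainty signal as (negative)
        # reward; here we just sample a random 6-DoF state for illustration.
        return rng.uniform(-1.0, 1.0, size=6)


def render_observation(pose: np.ndarray) -> np.ndarray:
    """Stand-in for the 6-DoF simulation environment's camera render."""
    return rng.uniform(0.0, 1.0, size=(64, 64, 3))


def mapping_loop(n_steps: int = 20) -> None:
    nerf = InstantNeRF()
    policy = UncertaintyAwarePolicy()
    pose = np.zeros(6)
    for step in range(n_steps):
        image = render_observation(pose)
        nerf.update(pose, image)                            # instant 3-D mapping
        pose = policy.select_next_state(nerf.uncertainty)   # next best state
        print(f"step {step:02d}  global uncertainty = {nerf.global_uncertainty():.3f}")


if __name__ == "__main__":
    mapping_loop()
```

In this toy loop, the printed global uncertainty decreases as observations accumulate, mirroring the abstract's idea of selecting views that minimize mapping uncertainty rather than following a fixed survey pattern.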