Accurate Implicit Neural Mapping with More Compact Representation in Large-scale Scenes Using Ranging Data

22 Sept 2023 · OpenReview Archive Direct Upload
Abstract: Large-scale 3D mapping is currently a research hotspot in robotics. A pressing challenge is reconstructing high-accuracy maps on hardware with limited memory. To address this problem, we propose a novel implicit neural mapping approach that achieves higher accuracy with lower memory consumption. It first adopts an improved hierarchical hash encoder, independent of geometric bounding (e.g., a bounding box or sphere), for a more compact map representation, and then leverages a spatial hash grid to restrict the encoding space to the proximity of geometric surfaces, preventing hash collisions between encodings in free space and those near geometric surfaces. The hash grid indexes the scene point cloud produced by ranging data. Through a tiny MLP, features encoded from sampled points in the hash grid are converted to truncated signed distance values. To further improve mapping accuracy, a new method is developed to instantly obtain more accurate signed distance labels from ranging data by computing the closest distances from sampled points to the point cloud indexed by the constructed hash grid, rather than only the distances from sampled points to geometric surfaces along rays, and then using these labels to supervise the learning of our hash encoder. Experimental evaluations on large-scale indoor and outdoor datasets demonstrate that our approach achieves state-of-the-art mapping performance with less than half the memory consumption of previous advanced 3D mapping methods using ranging data.
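The supervision scheme described above, labeling each sampled point with its truncated signed distance to the nearest point in the cloud, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a brute-force nearest-neighbour search where the paper uses its spatial hash grid index, and the function name, the normal-based sign convention, and the `trunc` parameter are assumptions for the sketch.

```python
import numpy as np

def sdf_labels(samples, points, normals, trunc=0.5):
    """Truncated signed-distance labels for sampled points.

    samples : (M, 3) query points sampled along rays
    points  : (N, 3) scene point cloud from ranging data
    normals : (N, 3) outward surface normals at each cloud point
    """
    # Brute-force nearest neighbour in the cloud; the paper instead looks up
    # nearby cloud points through its spatial hash grid (or a k-d tree).
    d2 = ((samples[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    dist = np.sqrt(d2[np.arange(len(samples)), idx])
    # Sign from the normal at the nearest surface point:
    # positive when the sample lies on the outward side of the surface.
    dot = np.einsum('ij,ij->i', samples - points[idx], normals[idx])
    sign = np.where(dot >= 0, 1.0, -1.0)
    # Truncate to match the TSDF target range.
    return np.clip(sign * dist, -trunc, trunc)
```

Because the label is the closest distance to the cloud itself, it stays accurate even for samples whose along-ray distance to the surface is a poor proxy (e.g., rays grazing a surface at a shallow angle).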