S²KAN-SLAM: Elastic Neural LiDAR SLAM With SDF Submaps and Kolmogorov-Arnold Networks

Published: 2025 · Last Modified: 05 Nov 2025 · IEEE Trans. Circuits Syst. Video Technol. 2025 · CC BY-SA 4.0
Abstract: Traditional LiDAR SLAM approaches prioritize localization over mapping, yet high-precision dense maps are essential for many applications involving intelligent agents. Recent methods leverage neural fields to enhance mapping capabilities; however, they still face several limitations. First, for scene representation, they typically employ neural fields with high-dimensional features and multi-layer perceptron decoders using non-continuous activation functions, which results in low learning efficiency and difficulty capturing high-frequency signals. Second, for scene organization, these methods often treat the entire scene as a single neural field, leading to inefficiency, inflexibility, and difficulty rectifying accumulated errors when mapping large-scale environments over extended periods. To tackle the first issue, we propose a lightweight continuous SDF regression approach that encodes the scene in single-valued embeddings and decodes SDF values with a Kolmogorov-Arnold Network. By minimizing the discrepancy among the measured range, sampling distance, and decoded SDF values, we enable iterative frame-to-model tracking and bundle-adjustment neural mapping. To mitigate the second challenge, we structure the whole scene into multiple neural SDF submaps. By incorporating node-node, node-submap, and loop-closure constraints into a global pose graph, the system can build dense neural maps with global consistency across large-scale scenes. Experimental evaluations in both real-world and simulated settings show that our system achieves superior mapping completeness and accuracy, higher learning efficiency, lower memory consumption, and greater flexibility than its counterparts.
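The supervision signal described above can be illustrated with a minimal sketch. This is not the authors' implementation: the `KANLayer` with a sine basis, the embedding dimensions, and the helper names (`decode_sdf`, `sdf_targets`) are all hypothetical stand-ins. It only shows the general idea of a Kolmogorov-Arnold-style decoder (learnable univariate functions on each edge, summed) and projective SDF targets, where a sample at distance d along a LiDAR ray with measured range r is supervised toward the value r - d.

```python
import numpy as np

rng = np.random.default_rng(0)

class KANLayer:
    """One Kolmogorov-Arnold layer: every input-output edge applies a
    learnable univariate function — here a small sine basis, a hypothetical
    choice standing in for whatever basis the paper actually uses."""

    def __init__(self, in_dim, out_dim, n_basis=4):
        self.freqs = np.arange(1, n_basis + 1)            # fixed frequencies
        self.coef = 0.1 * rng.standard_normal((out_dim, in_dim, n_basis))

    def __call__(self, x):                                # x: (in_dim,)
        phi = np.sin(np.outer(x, self.freqs))             # (in_dim, n_basis)
        # Sum the per-edge univariate responses into each output unit.
        return np.einsum("oib,ib->o", self.coef, phi)

def decode_sdf(feat, layers):
    """Decode a scalar SDF value from a (hypothetical) scene embedding."""
    h = feat
    for layer in layers:
        h = layer(h)
    return h[0]

def sdf_targets(measured_range, sample_dists):
    """Projective SDF supervision: a sample at distance d along a ray
    whose measured range is r gets the target value r - d."""
    return measured_range - sample_dists

# One LiDAR ray with a 5.0 m measured range and four samples along it:
# points short of the hit get positive targets, points beyond it negative.
r = 5.0
dists = np.array([4.2, 4.8, 5.0, 5.3])
targets = sdf_targets(r, dists)                           # ~[0.8, 0.2, 0.0, -0.3]

# Decode one SDF prediction per sample from per-sample stand-in embeddings
# and form the squared discrepancy a mapping step would minimize.
layers = [KANLayer(3, 8), KANLayer(8, 1)]
feats = rng.standard_normal((dists.size, 3))
preds = np.array([decode_sdf(f, layers) for f in feats])
loss = float(np.mean((preds - targets) ** 2))
```

In a full system the embeddings would be interpolated from the scene representation at each sample point and the loss would be backpropagated through both the decoder and the embeddings; here both are frozen random values purely to make the data flow concrete.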