Navigating with Less: Reinforcement Learning for UGVs Under Sparse LiDAR Inputs

19 Nov 2025 (modified: 29 Dec 2025) · ICC 2025 Workshop RAS Submission · CC BY 4.0
Keywords: Unmanned Ground Vehicles, Deep Q-learning, Soft Actor-Critic
Abstract: Autonomous navigation remains a fundamental challenge for unmanned ground vehicles (UGVs) operating in complex and unstructured environments. Existing learning-based solutions typically rely on computationally intensive perception pipelines such as 3D SLAM and PointNet, which are difficult to deploy on resource-constrained platforms. This paper proposes an efficient end-to-end framework that pairs a lightweight 32-bin LiDAR descriptor with a simple MLP, comparing discrete-action Dueling DQN against continuous-action SAC. Simulation results show that SAC significantly outperforms DQN in success rate, collision avoidance, convergence stability, and control smoothness, demonstrating that algorithmic choice can matter more than perception complexity in achieving high-performance navigation on computationally limited UGV platforms.
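The abstract's "lightweight 32-bin LiDAR descriptor" can be pictured as a simple sector-wise compression of a raw scan. The sketch below is an assumption, not the paper's actual preprocessing: it splits the scan into 32 equal angular sectors and keeps the nearest-obstacle range per sector, normalized to [0, 1], yielding a fixed-size input for a small MLP. The function name `lidar_to_bins` and the 720-beam scan size are hypothetical.

```python
import numpy as np

def lidar_to_bins(ranges, n_bins=32, max_range=10.0):
    """Compress a raw LiDAR scan into a fixed-size descriptor.

    Hypothetical sketch of a 32-bin descriptor: the scan is split into
    equal angular sectors, and each bin keeps the minimum (nearest
    obstacle) range in its sector, normalized to [0, 1].
    """
    ranges = np.clip(np.asarray(ranges, dtype=np.float32), 0.0, max_range)
    sectors = np.array_split(ranges, n_bins)  # handles non-divisible beam counts
    return np.array([s.min() for s in sectors], dtype=np.float32) / max_range

# Example: compress a hypothetical 720-beam scan to a 32-dim MLP input.
scan = np.random.uniform(0.2, 10.0, size=720)
descriptor = lidar_to_bins(scan)
print(descriptor.shape)  # (32,)
```

Such a descriptor sidesteps heavy point-cloud networks like PointNet: the policy network only ever sees a small, fixed-length vector regardless of the sensor's native resolution.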
Submission Number: 15