Autonomous Navigation of UAVs in Unknown 3D Environments Using Deep Reinforcement Learning for Path Planning

Haoran Han, Jian Cheng, Maolong Lv, Zhilong Xi

Published: 01 Nov 2025, Last Modified: 25 Jan 2026, IEEE Transactions on Vehicular Technology, CC BY-SA 4.0
Abstract: Unmanned aerial vehicles (UAVs) have been studied extensively across many fields. Because real-world environments are often riddled with obstacles, a high-performance autonomous navigation system is essential. Deep reinforcement learning (DRL) has attracted significant attention as a navigation algorithm, but existing works have several limitations. Most assume that the UAV moves with discrete actions or simplified dynamics. Numerous grid-mapping efforts focus primarily on expanding obstacle coverage without addressing the curse of dimensionality that arises when a network of limited capacity must process redundant information. Furthermore, the transferability of the trained agent has rarely been tested. To address these limitations, this paper proposes a multiscale surrounding state, in which the proximal and distal surroundings are represented separately by exact and statistical information, balancing state coverage against dimensionality. A truncation operation is also introduced to improve transferability. Lastly, a modular UAV navigation system with a DRL path planner is constructed. Experimental results demonstrate that the proposed autonomous navigation system functions in a variety of unknown environments.
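To make the two key ideas in the abstract concrete, the sketch below illustrates one plausible reading of them; it is not the paper's actual implementation. `multiscale_state` keeps exact binary occupancy for a proximal cube around the UAV and summarizes the distal region with per-block mean occupancy (a statistic), trading coverage against state dimensionality. `truncated_goal` clips the goal distance at an assumed bound `d_max`, so that far-away goals produce identical states regardless of map size, which is one way a truncation operation could aid transferability. All function names, parameters, and the pooling scheme are assumptions for illustration.

```python
import numpy as np

def multiscale_state(local_grid: np.ndarray, proximal_radius: int,
                     coarse_factor: int) -> np.ndarray:
    """Build a multiscale surrounding state from a cubic occupancy grid
    centred on the UAV (hypothetical sketch, not the paper's method).

    - Proximal region (within `proximal_radius` cells of the centre):
      exact binary occupancy values.
    - Distal region: the remaining cells are pooled into coarse blocks of
      side `coarse_factor`, each summarised by its mean occupancy, which
      keeps wide coverage at low dimensionality.
    """
    n = local_grid.shape[0]
    c = n // 2
    lo, hi = c - proximal_radius, c + proximal_radius + 1
    # Exact proximal cube, flattened.
    proximal = local_grid[lo:hi, lo:hi, lo:hi].astype(np.float32).ravel()

    # Statistical distal summary: zero out the proximal cube, then
    # average-pool the grid over coarse blocks.
    masked = local_grid.astype(np.float32).copy()
    masked[lo:hi, lo:hi, lo:hi] = 0.0
    b = coarse_factor
    m = n // b
    pooled = (masked[:m * b, :m * b, :m * b]
              .reshape(m, b, m, b, m, b)
              .mean(axis=(1, 3, 5)))
    return np.concatenate([proximal, pooled.ravel()])

def truncated_goal(rel_goal: np.ndarray, d_max: float) -> np.ndarray:
    """Truncate the relative goal vector's length at d_max, so all goals
    beyond d_max look identical to the policy (assumed interpretation)."""
    d = float(np.linalg.norm(rel_goal))
    if d <= d_max or d == 0.0:
        return rel_goal
    return rel_goal * (d_max / d)
```

For an 8x8x8 local grid with `proximal_radius=1` and `coarse_factor=4`, the state has 3^3 = 27 exact values plus 2^3 = 8 pooled values, i.e. 35 dimensions instead of 512, which is the coverage-versus-dimensionality trade-off the abstract refers to.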