Deep Reinforcement Learning Based Online Area Covering Autonomous Robot

Published: 01 Jan 2021, Last Modified: 07 Oct 2024 · ICARA 2021 · CC BY-SA 4.0
Abstract: Autonomous area-covering robots have been increasingly adopted for diverse applications. In this paper, we investigate the effectiveness of deep reinforcement learning (RL) algorithms for online area coverage while minimizing overlap. Through simulation experiments in grid-based environments and in the Gazebo simulator, we show that Deep Q-Network (DQN) based algorithms efficiently cover unknown indoor environments. Furthermore, through empirical evaluations and theoretical analysis, we demonstrate that DQN with prioritized experience replay (DQN-PER) significantly reduces sample complexity while achieving lower overlap than other DQN variants. In addition, through simulations we demonstrate the performance advantage of DQN-PER over the state-of-the-art area coverage algorithms BA* and BSA. Our experiments also indicate that a pre-trained RL agent can efficiently cover new, unseen environments with minimal additional sample complexity. Finally, we propose a novel formulation of the state representation that yields an area-agnostic RL agent for efficiently covering unknown environments.
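The DQN-PER agent described in the abstract relies on prioritized experience replay, which samples transitions in proportion to their TD error rather than uniformly. As an illustration only (not the authors' implementation; all class and parameter names here are hypothetical), a minimal proportional-prioritization buffer might look like:

```python
import random


class PrioritizedReplayBuffer:
    """Minimal sketch of proportional prioritized experience replay.

    Transitions with larger TD error are replayed more often; the
    importance-sampling weights correct the resulting bias.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha        # how strongly priorities skew sampling
        self.buffer = []          # stored transitions
        self.priorities = []      # one priority per transition
        self.pos = 0              # ring-buffer write index

    def add(self, transition, td_error=1.0):
        # Small epsilon keeps every transition sampleable.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.buffer)),
                              weights=probs, k=batch_size)
        n = len(self.buffer)
        # Importance-sampling weights, normalized by the max for stability.
        weights = [(n * probs[i]) ** (-beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        return [self.buffer[i] for i in idxs], idxs, weights

    def update_priorities(self, idxs, td_errors):
        # Called after a learning step with the fresh TD errors.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha
```

In a coverage setting, each transition would pair a local occupancy observation with a movement action, so that high-TD-error steps (e.g. entering already-covered cells) are revisited more often during training.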