Abstract: Autonomous area-covering robots are increasingly deployed in residential and commercial settings for a variety of purposes. These robots usually employ universal coverage algorithms to cover indoor environments. The performance of such algorithms depends heavily on room geometry and obstacle locations, and often suffers from significant path overlap, leading to inordinately long coverage times, especially in realistic unknown environments with dynamic obstacles. Hence, smarter algorithms that adapt to the environment can improve performance significantly. In this study, we explore deep reinforcement learning (RL) algorithms for efficient coverage of unknown environments with multiple dynamic obstacles. Through experiments in grid-based environments and the Gazebo simulator, we demonstrate the superior performance of RL-based coverage algorithms in environments with dynamic obstacles. The RL-based algorithm is compared with the BA* algorithm with dynamic re-planning to demonstrate the advantages of the former over one-shot algorithms. Further, by employing transfer learning, the trained RL agent learns to cover unseen environments with minimal additional sample complexity. Importantly, we show that RL agents trained in smaller environments can be deployed for coverage in larger unknown environments with only marginal additional sample complexity.
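The abstract does not specify how the grid-based coverage task is formulated as an RL problem. Purely as an illustrative sketch, one common formulation rewards newly covered cells, penalizes overlap, and treats collisions with a randomly drifting obstacle as a penalty; all names and parameters below (GridCoverageEnv, size, reward values) are assumptions for illustration, not the authors' implementation.

```python
import random

class GridCoverageEnv:
    """Minimal grid-world coverage task with one dynamic obstacle (illustrative only)."""

    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def __init__(self, size=8, seed=0):
        self.size = size
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.agent = (0, 0)
        self.obstacle = (self.size - 1, self.size - 1)
        self.covered = {self.agent}  # set of cells visited so far
        return self._obs()

    def _obs(self):
        # Observation: agent position, obstacle position, fraction of area covered.
        return (self.agent, self.obstacle, len(self.covered) / self.size ** 2)

    def _clip(self, r, c):
        return (min(max(r, 0), self.size - 1), min(max(c, 0), self.size - 1))

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        nxt = self._clip(self.agent[0] + dr, self.agent[1] + dc)

        # Dynamic obstacle drifts randomly each step (assumed motion model).
        odr, odc = self.rng.choice(self.ACTIONS)
        self.obstacle = self._clip(self.obstacle[0] + odr, self.obstacle[1] + odc)

        if nxt == self.obstacle:
            return self._obs(), -1.0, False  # collision penalty; agent stays put

        self.agent = nxt
        # Reward new cells; small penalty for revisiting discourages overlap.
        reward = 1.0 if nxt not in self.covered else -0.1
        self.covered.add(nxt)
        done = len(self.covered) == self.size ** 2  # episode ends at full coverage
        return self._obs(), reward, done

# Usage example: roll out a random policy until coverage is complete.
env = GridCoverageEnv(size=8, seed=42)
obs, done = env.reset(), False
while not done:
    obs, reward, done = env.step(random.randrange(4))
```

A trained RL agent would replace the random action choice above; a one-shot planner such as BA* would instead compute a full coverage path up front and re-plan only when an obstacle invalidates it.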