Deep recurrent Q-learning for energy-constrained coverage with a mobile robot

Published: 01 Jan 2023, Last Modified: 15 May 2025 · Neural Comput. Appl. 2023 · CC BY-SA 4.0
Abstract: In this paper, we study the problem of covering an environment with an energy-constrained robot in the presence of multiple charging stations. As the robot's on-board power supply is limited, it might not have enough energy to cover all the points in the environment on a single charge. Instead, it needs to stop at one or more charging stations to recharge its battery intermittently. The robot cannot violate the energy constraint, i.e., it cannot visit a location with negative remaining energy. To solve this problem, we propose a deep Q-learning framework that produces a policy to maximize coverage and minimize budget violations. Our framework also leverages the memory of a recurrent neural network (RNN) to better suit this multi-objective optimization problem. We have tested the presented framework in a \(16 \times 16\) grid environment with different charging-station layouts and various obstacle configurations. Results show that our proposed method finds feasible solutions while performing favorably against two comparable techniques.
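To make the problem setup concrete, the following is a minimal sketch of the coverage task the abstract describes: a \(16 \times 16\) grid with obstacles, a per-step energy cost, charging stations that restore the battery, a reward for newly covered cells, and a penalty for violating the energy budget. All class names, reward magnitudes, and dynamics details here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the energy-constrained coverage environment.
# Names, costs, and reward values are illustrative assumptions only.

class CoverageEnv:
    MOVES = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # up, down, left, right

    def __init__(self, size=16, battery=40, stations=((0, 0), (15, 15)), obstacles=()):
        self.size = size
        self.max_battery = battery
        self.stations = set(stations)
        self.obstacles = set(obstacles)
        self.reset()

    def reset(self):
        self.pos = (0, 0)
        self.battery = self.max_battery
        self.covered = {self.pos}
        return self.pos, self.battery

    def step(self, action):
        dr, dc = self.MOVES[action]
        r, c = self.pos[0] + dr, self.pos[1] + dc
        # Invalid moves (off-grid or into an obstacle) leave the robot in place.
        if 0 <= r < self.size and 0 <= c < self.size and (r, c) not in self.obstacles:
            self.pos = (r, c)
        self.battery -= 1  # each step costs one unit of energy (assumed)
        reward = 1.0 if self.pos not in self.covered else 0.0  # reward new coverage
        self.covered.add(self.pos)
        if self.pos in self.stations:
            self.battery = self.max_battery  # recharge to full at a station
        if self.battery < 0:
            reward -= 10.0  # budget-violation penalty (illustrative magnitude)
        free_cells = self.size * self.size - len(self.obstacles)
        done = self.battery < 0 or len(self.covered) == free_cells
        return (self.pos, self.battery), reward, done
```

A DRQN agent would consume the partial observation returned by `step` through an RNN, letting the hidden state summarize which cells have already been covered and how much budget remains since the last recharge.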