Abstract: Cooperative multi-robot teams must be able to explore
cluttered and unstructured environments while dealing with
communication dropouts that prevent them from exchanging the local
information needed to maintain team coordination. Robots therefore need
to consider high-level teammate intentions during action selection.
In this letter, we present the first Macro Action Decentralized Exploration
Network (MADE-Net) using multi-agent deep reinforcement
learning (DRL) to address the challenges of communication
dropouts during multi-robot exploration in unseen, unstructured,
and cluttered environments. Simulated robot team exploration experiments
were conducted and compared against classical and DRL
methods; MADE-Net outperformed all benchmark methods
in terms of computation time, total travel distance, number of local
interactions between robots, and exploration rate across varying
degrees of communication dropout. A scalability study in 3D
environments showed that exploration time with MADE-Net
decreased as team and environment sizes increased. The experiments
presented highlight the effectiveness and robustness of our method.