Abstract: We tackle planning under uncertainty when multiple robots
must proactively plan perception and communication acts, and decide
whether the cost needed to obtain a state estimate is justified by the
benefit of the information obtained. The approach is suitable when observations are costly but, when they do occur, are of high quality and
recover the system's joint state, either alone or together with communication. Such cases allow one to sidestep construction of the full joint
belief space, a well-known source of intractability in planning. Formulating the problem as a class of Markov decision processes to be solved
over joint states and structured to allow decentralized execution, we give
a suitable Bellman recurrence using macro-actions. We solve for policies
for the individual robots, providing a simulation case study and reporting
on a physical robot implementation. Based on our experience with hardware, we examine some non-idealities identified in practice, proffering
suitable enhancements to the basic model.