Navigating to Success in Multi-Modal Human-Robot Collaboration
Abstract: Human-guided robotic exploration is a useful approach
to gathering information at remote locations, especially
those that might be too risky, inhospitable, or inaccessible for
humans. Maintaining common ground between the remotely-located
partners is a challenge, one that can be facilitated by
multi-modal communication. In this paper, we explore how
participants utilized multiple modalities to investigate a remote
location with the help of a robotic partner. Participants issued
spoken natural language instructions and received from the
robot: text-based feedback, continuous 2D LIDAR mapping,
and, upon request, static photographs. We observed that
participants adopted different strategies in their use of the
modalities, and hypothesize that these differences may be
correlated with success at several exploration sub-tasks. We found that
requesting photos may have improved the identification and
counting of some key entities (doorways in particular) and
that this strategy did not hinder the amount of overall area
exploration. Future work with larger samples may reveal the
effects of more nuanced photo and dialogue strategies, which
can inform the training of robotic agents. Additionally, we
announce the release of our unique multi-modal corpus of
human-robot communication in an exploration context: SCOUT,
the Situated Corpus on Understanding Transactions.