Abstract: Successful teamwork requires team members to understand each other and communicate effectively, managing multiple linguistic and paralinguistic tasks at once. Because these tasks are potentially interrelated, it is important to be able to make multiple types of predictions on the same dataset. Here, we introduce Multimodal Communication Annotations for Teams (MultiCAT), a speech- and text-based dataset consisting of audio recordings with automated and hand-corrected transcriptions. MultiCAT builds upon data collected from teams working collaboratively to save victims in a simulated search and rescue mission, and provides annotations and benchmark results for the following tasks: (1) dialog act classification, (2) adjacency pair detection, (3) sentiment and emotion recognition, (4) closed-loop communication detection, and (5) phonetic entrainment detection. We posit that additional work on these tasks and their intersections will further improve understanding of team communication and its relation to team performance.
Paper Type: long
Research Area: Dialogue and Interactive Systems
Contribution Types: Data resources, Data analysis
Languages Studied: English