DyCoDa: A Multi-modal Data Collection of Multi-user Remote Survival Game Recordings

Published: 01 Jan 2022 · Last Modified: 28 Mar 2025 · SPECOM 2022 · License: CC BY-SA 4.0
Abstract: In this paper, we present a novel multi-modal multi-party conversation corpus called DyCoDa (Dynamic Conversational Dataset). It consists of intensive remote conversations among three interaction partners within a collaborative problem-solving scenario. The fundamental aim of building this corpus is to investigate how humans interact with each other online via a video-conferencing tool in a cooperative setting, and which audio-visual cues are conveyed and perceived during this interaction. In addition to high-quality audio and video recordings, depth and infrared data recorded with the Microsoft Azure Kinect are also provided. Furthermore, several self-evaluation questionnaires are used to collect socio-demographic information as well as personality structure and individual team roles. In total, 30 native German-speaking participants took part in the experiment, which was carried out at Magdeburg University. Overall, DyCoDa comprises 10 hours of recorded interactions and will be beneficial for researchers in the field of human-computer interaction.
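Azure Kinect sessions are typically stored as MKV recordings containing the depth and infrared streams. As a minimal sketch of how such recordings could be read, the snippet below uses the third-party pyk4a library; the file name is hypothetical, and DyCoDa's actual packaging and file layout may differ.

    # Sketch: iterating over depth and infrared frames from an Azure Kinect
    # MKV recording. Assumes pyk4a (pip install pyk4a); the path is hypothetical.
    from pyk4a import PyK4APlayback

    playback = PyK4APlayback("dycoda_session01.mkv")  # hypothetical file name
    playback.open()
    try:
        while True:
            capture = playback.get_next_capture()
            if capture.depth is not None:
                depth = capture.depth  # uint16 array, depth in millimeters
            if capture.ir is not None:
                ir = capture.ir        # uint16 array, infrared intensity
            # ... pass frames to downstream analysis here ...
    except EOFError:
        pass  # pyk4a raises EOFError at the end of the recording
    finally:
        playback.close()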
