Abstract: In this paper, we present a novel multi-modal multi-party conversation corpus called DyCoDa (Dynamic Conversational Dataset). It consists of intensive remote conversations among three interaction partners within a collaborative problem-solving scenario. The primary aim of building this corpus is to investigate how humans interact with each other online via a video conferencing tool in a cooperative setting, and which audio-visual cues are conveyed and perceived during this interaction. In addition to high-quality audio and video recordings, depth and infrared data recorded with the Microsoft Azure Kinect are also provided. Furthermore, various self-assessment questionnaires are used to collect socio-demographic information as well as each participant's personality structure and individual team role. In total, 30 native German-speaking participants took part in the experiment, which was conducted at Magdeburg University. Overall, DyCoDa comprises 10 hours of recorded interactions and will be beneficial for researchers in the field of human-computer interaction.