Superpowering Emotion Through Multimodal Cues in Collaborative VR

Published: 01 Jan 2024 · Last Modified: 27 May 2025 · ISMAR 2024 · CC BY-SA 4.0
Abstract: Representing emotion in collaborative Virtual Reality (VR) environments is an emerging topic, as VR can enable humans to express augmented emotions beyond their normal abilities. This research explores how emotion can be represented in collaborative VR beyond facial expressions. We developed a virtual system that communicates emotion using three sensory modalities (textual, auditory and visual) through two spatiotemporal representations: a human-form avatar and a superpower. Real-time emotion is shown through a natural avatar, while emotional states are objectified as superpower phenomena embedded in the in-situ environment over a period of time. We incorporated subjective, physiological and behavioural measures to evaluate emotion in an asynchronous VR collaboration scenario. The results suggest that showing emotions through combined avatar and superpower augmentations (audio-visual) provided the most immersive VR experience, with users more aroused and positive emotion felt as more dominant. We also found that understanding emotion requires simple, relatable visuals that people commonly recognise, whereas arousing emotion requires changes in environmental context to indicate different states. We provide design insights for using multisensory modalities in empathic VR systems, addressing the lack of a standardised representation of emotion in collaborative VR.