AI-assisted affective computing and spatial audio for interactive multimodal virtual environments: research proposal
Abstract: This paper contains the research proposal of Juan Antonio De Rus, presented at the MMSys 2022 doctoral symposium. The use of virtual reality (VR) is growing every year. With the normalization of remote work, it is to be expected that the use of immersive virtual environments to support tasks such as online meetings, education, etc., will grow even further. VR environments typically include multimodal content formats (synthetic content, video, audio, text) and even multi-sensory stimuli to provide an enriched user experience. In this context, Affective Computing (AC) techniques assisted by Artificial Intelligence (AI) become a powerful means to determine the user's perceived Quality of Experience (QoE). In the field of AC, we investigate a variety of tools to obtain accurate emotional analysis by applying AI techniques to physiological data. In this doctoral study we have formulated a set of open research questions and objectives on which we plan to generate valuable contributions and knowledge in the fields of AC, spatial audio, and multimodal interactive virtual environments. One of these objectives is the creation of tools to evaluate QoE automatically, even in real time, which can provide valuable benefits to both service providers and consumers. For data acquisition we use sensors of varying quality to study the scalability, reliability, and replicability of our solutions, as clinical-grade sensors are not always within the reach of the average user.