VRtalk

Yuan Yu, Chunlei Xu, Shirao Yang, Yu Cao, Yuyang Wang, Boon Giin Lee

Published: 01 Jan 2025, Last Modified: 15 Jan 2026. Proceedings of the 2025 IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2025). License: CC BY-SA 4.0.
Abstract: The convergence of virtual reality live streaming and AI-driven avatars has emerged as a significant technological trend. However, current integration attempts remain at the proof-of-concept stage, with the primary challenge being the establishment of an automatic interaction system. To build interactive, intelligent anime avatars within VR frameworks, we have developed a multimodal interaction architecture centered on dialogue agents, realizing comprehensive understanding, reasoning, and response. Our approach (1) proposes high-granularity explicit-implicit understanding and a dual-center switchable reasoning mechanism to support flexible responses, (2) introduces a dual-source animation mechanism for co-speech face-body visualization and a textual command module for supervising cross-modal animation, and (3) enhances expressiveness by mapping persona, content, voice, and motion to an anime style. Experimental results demonstrate the state-of-the-art performance of VRtalk, highlighting its practical significance and future potential.
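To make the abstract's pipeline concrete, below is a minimal Python sketch of the described flow: explicit-implicit understanding, dual-center switchable reasoning, and dual-source animation supervised by a textual command. All class and function names (Utterance, understand, reason, animate) and the toy logic inside them are illustrative assumptions, not the paper's actual interfaces or algorithms.

from dataclasses import dataclass

# Hypothetical multimodal input: transcribed speech plus an implicit affect cue.
@dataclass
class Utterance:
    text: str          # explicit channel: what the user said
    sentiment: float   # implicit channel: prosody/vision-derived affect in [-1, 1]

def understand(u: Utterance) -> dict:
    """Toy explicit-implicit understanding: pair the literal request
    with an inferred emotional state."""
    mood = "positive" if u.sentiment > 0 else "negative" if u.sentiment < 0 else "neutral"
    return {"intent": u.text, "mood": mood}

def reason(state: dict, center: str) -> str:
    """Toy dual-center switchable reasoning: route the same state through
    either a task-centered or a persona-centered response policy."""
    if center == "task":
        return f"Answering directly: {state['intent']}"
    return f"[in-character, {state['mood']} tone] Let's talk about: {state['intent']}"

def animate(reply: str, command: str) -> dict:
    """Toy dual-source animation: face cues derived from the reply text
    (speech-driven source), body motion chosen by a supervising textual
    command (command-driven source)."""
    visemes = [c for c in reply.lower() if c in "aeiou"]  # crude lip-sync proxy
    return {"face": visemes[:8], "body": command}

if __name__ == "__main__":
    u = Utterance("what is VR live streaming?", sentiment=0.6)
    state = understand(u)
    reply = reason(state, center="persona")
    frame = animate(reply, command="wave_and_smile")
    print(reply)
    print(frame)

The point of the sketch is the data flow, not the components: each stage consumes the previous stage's output, and the reasoning "center" and animation "command" are the two switchable control points the abstract describes.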