A touchable virtual screen interaction system with handheld Kinect camera

Published: 01 Jan 2013, Last Modified: 15 Nov 2024 · WCSP 2013 · CC BY-SA 4.0
Abstract: In recent years, augmented reality (AR) has been a research hotspot. AR emphasizes merging virtual objects and the real scene with the correct perspective relationship. However, most existing AR systems simply overlay virtual objects on the real-scene image, so if a real hand is in front of a virtual object, the virtual object still covers the hand with an incorrect occlusion relationship. Moreover, most systems merely present virtual information, preventing the user from touching or interacting with the virtual objects. In this paper, we build an AR system that achieves both mutual occlusion and interaction. Instead of using a structure-from-motion method, we use the depth image from a Kinect camera to integrate a 3D space volume and to track the camera pose. We then propose a layer rendering method based on the depth relationship to implement mutual-occlusion fusion. Finally, we realize collision detection and interaction through a fast voxel detection method: we define a neighbor cube for every virtual surface point and run the detection in parallel on the GPU.
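The two core ideas of the abstract can be sketched in plain Python: a per-pixel depth comparison for layer rendering (the nearer surface, real or virtual, wins), and a neighbor-cube occupancy test for collision detection. This is a minimal illustration under assumed data layouts, not the paper's GPU implementation; all names, the voxel size, and the cube radius are hypothetical.

```python
# Hypothetical sketch of depth-based layer rendering and neighbor-cube
# collision detection; the paper runs these per pixel / per surface point
# in parallel on the GPU.

def composite_pixel(real_rgb, real_depth, virt_rgb, virt_depth):
    """Layer rendering by depth: keep whichever surface is nearer.

    If a real hand is closer to the camera than the virtual object,
    the hand correctly occludes it (mutual occlusion), instead of the
    virtual object always being drawn on top.
    """
    if virt_depth is not None and (real_depth is None or virt_depth < real_depth):
        return virt_rgb   # virtual surface is in front
    return real_rgb       # real surface (e.g. a hand) is in front

def collides(surface_point, occupied_voxels, voxel_size=0.02, radius=1):
    """Neighbor-cube test around one virtual surface point.

    `occupied_voxels` is a set of integer voxel coordinates occupied by
    real geometry in the fused depth volume; the point collides if any
    voxel in its (2*radius + 1)**3 neighbor cube is occupied.
    """
    cx, cy, cz = (int(c // voxel_size) for c in surface_point)
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dz in range(-radius, radius + 1):
                if (cx + dx, cy + dy, cz + dz) in occupied_voxels:
                    return True
    return False
```

For example, a real hand at 0.5 m occludes a virtual object at 0.8 m, so `composite_pixel` returns the hand's color for that pixel; moving the hand behind the object (depth 0.9 m) returns the virtual color instead.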