Abstract: Projector-camera systems, used in Spatial Augmented Reality, automatically adapt video projections to scene objects according to the visualization conditions. This paper introduces a novel non-invasive (i.e., without Structured Light) method based on a combination of traditional Feature Matching (FM) and the more computationally efficient Optical Flow (OF). It requires only one projected and one acquired image at a time, even in the most difficult case, when both the projected content and the geometric transformations change every frame. It detects scene changes, i.e., situations in which OF fails and must be replaced by FM. In the experiments, we show that the method yields more precise and less shaky compensation for different types of projected videos, and is up to 2.8 times faster than previous FM-based works.
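The hybrid scheme described above — cheap per-frame Optical Flow with a fall-back to full Feature Matching when too few points survive tracking — can be sketched as a control loop. This is a hypothetical illustration, not the paper's implementation: the tracker functions are stand-ins (in practice they might be pyramidal Lucas-Kanade OF and ORB matching with RANSAC), and the `MIN_TRACKED` threshold is an assumed parameter.

```python
import numpy as np

MIN_TRACKED = 50  # assumed threshold of surviving points below which OF is declared failed

def optical_flow_step(prev_pts, frame):
    """Stand-in for sparse OF (e.g. pyramidal Lucas-Kanade):
    returns the points that tracked successfully and a per-point mask."""
    status = frame["of_status"]          # simulated per-point tracking outcome
    return prev_pts[status], status

def feature_match_step(frame):
    """Stand-in for full FM (e.g. ORB keypoints + RANSAC):
    slower, but re-detects correspondences from scratch."""
    return frame["fm_points"]

def hybrid_track(frames, init_pts):
    """Run OF every frame; on a scene change (OF failure) recover with FM."""
    pts, log = init_pts, []
    for frame in frames:
        pts, status = optical_flow_step(pts, frame)
        if status.sum() < MIN_TRACKED:          # scene change detected: OF lost the points
            pts = feature_match_step(frame)     # fall back to the expensive FM pass
            log.append("FM")
        else:
            log.append("OF")
    return pts, log

# Usage on simulated frames: the second frame destroys most correspondences,
# forcing a single FM recovery, after which OF resumes.
init = np.arange(200, dtype=float).reshape(100, 2)
good = np.ones(100, dtype=bool)
bad = np.zeros(100, dtype=bool); bad[:5] = True
frames = [
    {"of_status": good, "fm_points": init},   # OF succeeds
    {"of_status": bad,  "fm_points": init},   # scene change: OF fails, FM recovers
    {"of_status": good, "fm_points": init},   # OF succeeds again
]
pts, log = hybrid_track(frames, init)
print(log)  # → ['OF', 'FM', 'OF']
```

The design point the abstract makes is that FM runs only on the rare frames where OF breaks down, which is how the method stays faster than purely FM-based approaches.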