VAMBAM: View and Motion-based Aspect Models for Distributed Omnidirectional Vision Systems

IJCAI 2001
Abstract: This paper proposes a new model for gesture recognition. The model, called view- and motion-based aspect models (VAMBAM), is an omnidirectional view-based aspect model built on motion-based segmentation. It realizes location-free and rotation-free gesture recognition with a distributed omnidirectional vision system (DOVS). The distributed vision system, consisting of multiple omnidirectional cameras, is a prototype of a perceptual information infrastructure for monitoring and recognizing the real world. In addition to presenting the concept of VAMBAM, the paper shows how the model achieves robust, real-time visual recognition with the DOVS.
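As a rough illustration of the motion-based segmentation step mentioned in the abstract, the sketch below shows generic frame differencing in Python. The abstract does not specify the paper's actual segmentation algorithm, so the function name, threshold value, and approach here are assumptions for illustration only.

```python
import numpy as np

def motion_segment(prev_frame: np.ndarray, frame: np.ndarray,
                   threshold: float = 25.0) -> np.ndarray:
    """Return a binary mask of moving pixels via frame differencing.

    A generic motion-based segmentation sketch; NOT the paper's
    actual method, which the abstract does not describe in detail.
    """
    # Absolute per-pixel difference between consecutive frames
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    if diff.ndim == 3:
        diff = diff.mean(axis=2)  # collapse color channels if present
    return diff > threshold       # True where motion was detected

if __name__ == "__main__":
    # Tiny synthetic demo: a bright patch appears between two frames
    prev = np.zeros((64, 64), dtype=np.uint8)
    curr = prev.copy()
    curr[20:30, 20:30] = 255
    mask = motion_segment(prev, curr)
    print("moving pixels:", int(mask.sum()))  # -> 100
```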