Abstract: Human-robot interaction requires more than robust people detection and tracking. It relies on integrating disparate scene information from tracking and recognition systems with current and prior knowledge to facilitate robotic understanding of, and interaction with, humans and the environment. In this work we discuss our efforts in developing and integrating visual scene processing systems for the purpose of enhancing human-robot interaction. We present our latest efforts in integrating 3D scene information to produce novel information sources, and we show how facial pose and pointing gestures are combined to localize a deictic gesture to a single point in space. Additionally, we discuss our efforts in integrating Markov logic networks for high-level reasoning with computer vision systems to facilitate scene understanding.
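As an illustration only (not taken from the paper), one common way to fuse a gaze ray derived from facial pose with a pointing ray into a single 3D target is to take the midpoint of the shortest segment between the two rays. The sketch below assumes this formulation; the function name, inputs, and the use of NumPy are my own choices, not the authors' method.

```python
import numpy as np

def fuse_gaze_and_pointing(p1, d1, p2, d2, eps=1e-9):
    """Midpoint of the shortest segment between two 3D rays.

    p1, d1: origin and direction of the gaze ray (from head pose).
    p2, d2: origin and direction of the pointing ray (from arm/hand).
    Returns a fused 3D target estimate, or None if the rays are
    nearly parallel and no unique target exists.
    """
    p1 = np.asarray(p1, float)
    p2 = np.asarray(p2, float)
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    w0 = p1 - p2
    b = d1 @ d2                  # cosine of the angle between the rays
    d = d1 @ w0
    e = d2 @ w0
    denom = 1.0 - b * b          # zero when the rays are parallel
    if denom < eps:
        return None
    t1 = (b * e - d) / denom     # parameter along the gaze ray
    t2 = (e - b * d) / denom     # parameter along the pointing ray
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Hypothetical usage: head at ~1.7 m, hand at ~1.0 m, both aimed forward.
target = fuse_gaze_and_pointing(
    p1=[0.0, 0.0, 1.7], d1=[1.0, 0.0, -0.5],
    p2=[0.3, 0.0, 1.0], d2=[1.0, 0.0, -0.2])
print(target)
```

This closest-point-of-approach construction is only one plausible instantiation of combining the two cues; the paper's actual integration of facial pose and pointing may differ.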