Abstract: Recognition of user activities is a key issue in context-aware computing. We present a method for recognizing users' daily activities using gaze motion features and image-based visual features. Gaze motion features dominate when inferring the user's egocentric context, whereas image-based visual features dominate when recognizing environments and target objects. The experimental results show that fusing these different types of features improves the performance of daily activity recognition.
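The abstract describes fusing gaze motion features with image-based visual features before classification. As a minimal sketch of such feature-level fusion, the snippet below concatenates the two feature vectors and classifies with a nearest-centroid model; the paper does not specify its classifier or feature dimensions, so both are illustrative assumptions.

```python
import numpy as np

def fuse_features(gaze_feats, visual_feats):
    """Feature-level fusion: concatenate gaze motion and
    image-based visual feature vectors along the last axis."""
    return np.concatenate([gaze_feats, visual_feats], axis=-1)

class NearestCentroid:
    """Toy stand-in classifier (the paper's actual classifier is not
    stated in the abstract); predicts the class whose mean feature
    vector is closest in Euclidean distance."""

    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.stack(
            [X[y == c].mean(axis=0) for c in self.labels_]
        )
        return self

    def predict(self, X):
        # Distance from every sample to every class centroid.
        d = np.linalg.norm(
            X[:, None, :] - self.centroids_[None, :, :], axis=-1
        )
        return self.labels_[d.argmin(axis=1)]
```

Concatenation is the simplest fusion scheme; it lets the classifier weight whichever modality dominates for a given activity, matching the abstract's observation that the two feature types are complementary.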