Context-Sensitive Human Activity Classification in Collaborative Learning Environments

Published: 01 Jan 2018, Last Modified: 14 Nov 2024 · SSIAI 2018 · CC BY-SA 4.0
Abstract: Human activity classification remains challenging due to the need to suppress structural noise, the multitude of possible activities, and strong variations in video acquisition. This paper studies human activity classification in a collaborative learning environment. We explore the use of color-based object detection in conjunction with contextualization of object interaction to isolate motion vectors specific to each human activity. The basic approach is to use a separate classifier for each activity. Here, we consider the detection of typing, writing, and talking activities in raw videos. The method was tested on 43 uncropped video clips comprising 620 video frames for writing, 1050 for typing, and 1755 for talking. Using simple k-NN classifiers, the method achieved accuracies of 72.6% for writing, 71% for typing, and 84.6% for talking. Classification accuracy improved to 92.5% (writing), 82.5% (typing), and 99.7% (talking) with the use of deep neural networks.
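The per-activity classification scheme described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature vectors stand in for motion-vector features extracted around color-detected objects, and all data, function names, and parameters here are hypothetical.

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training neighbors (Euclidean distance), as in a simple k-NN scheme.
    One such binary classifier would be trained per activity
    (writing / typing / talking), with label 1 = activity present."""
    dists = np.linalg.norm(train_X - x, axis=1)      # distance to every training sample
    nearest = np.argsort(dists)[:k]                  # indices of the k closest samples
    votes = Counter(train_y[i] for i in nearest)     # vote among their labels
    return votes.most_common(1)[0][0]

# Toy stand-in data: two well-separated clusters of 4-D "motion features".
rng = np.random.default_rng(0)
train_X = np.vstack([rng.normal(0, 1, (20, 4)),   # activity absent
                     rng.normal(3, 1, (20, 4))])  # activity present
train_y = np.array([0] * 20 + [1] * 20)

label = knn_predict(train_X, train_y, np.full(4, 3.0))
```

In the paper's setting, one such classifier per activity lets each decision use features contextualized to that activity's object interactions, rather than forcing a single multi-class model over all activities.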
