Abstract: In this article, we present a public human-activity dataset called ‘HAD-AW’. It consists of four types of 3D sensory signals (acceleration, angular velocity, rotation displacement, and gravity) for 31 activities of daily living (ADL), measured by a wearable smartwatch. The dataset is intended as a benchmark for algorithm comparison. We succinctly survey some existing datasets and compare them to ‘HAD-AW’; the goal is to make the dataset usable and extensible by others. We then introduce a framework for ADL recognition built on several pre-processing steps that compute statistical and physical features, which we call AMED. These features are classified using an LSTM recurrent network, and the proposed approach is compared against a random-forest algorithm. Finally, our experiments show that the joint use of all four sensors achieves the best prediction accuracy, reaching 95.3% over all activities, while reducing training and testing time by 88% to 98% compared with the random-forest classifier.
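The abstract describes computing statistical and physical features from windows of the four 3D sensor streams before classification. As a rough illustration only, the sketch below extracts a few common per-axis statistical features (mean, standard deviation, min, max, energy) from sliding windows over each sensor; the actual AMED feature set, window length, and step size are not specified in the abstract, so every choice here (the helper names, the 128-sample window, the feature list) is an assumption.

```python
import numpy as np

def window_features(window):
    """Per-axis statistical features for one 3D sensor window.
    NOTE: an illustrative feature set, not the paper's AMED features.
    window: (n_samples, 3) array."""
    feats = []
    for axis in range(window.shape[1]):
        x = window[:, axis]
        feats += [x.mean(),            # mean
                  x.std(),             # standard deviation
                  x.min(),             # minimum
                  x.max(),             # maximum
                  np.mean(x ** 2)]     # average signal energy
    return np.array(feats)

def extract(signals, win=128, step=64):
    """Slide a window over each sensor stream and concatenate the
    per-sensor feature vectors into one row per window.
    signals: list of (n, 3) arrays, one per sensor (e.g. acceleration,
    angular velocity, rotation displacement, gravity)."""
    n = min(s.shape[0] for s in signals)
    rows = []
    for start in range(0, n - win + 1, step):
        row = np.concatenate([window_features(s[start:start + win])
                              for s in signals])
        rows.append(row)
    return np.stack(rows)
```

The resulting feature matrix (one row per window) could then be fed to a sequence model such as an LSTM, or directly to a random-forest classifier for the baseline comparison.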