EdgeActNet: Edge Intelligence-Enabled Human Activity Recognition Using Radar Point Cloud

Published: 01 Jan 2024, Last Modified: 07 Aug 2024 · IEEE Trans. Mob. Comput. 2024 · CC BY-SA 4.0
Abstract: Human activity recognition (HAR) has become a research hotspot because of its wide range of potential applications, and it places high demands on real-time, power-efficient processing. However, the large volume of data transferred between sensors and servers, together with computation-intensive recognition models, hinders the implementation of real-time HAR systems. Recently, edge computing has been proposed to address this challenge by moving computation and data storage to the sensors rather than relying on a centralized server or cloud. In this paper, we investigate binary neural networks (BNNs) for edge intelligence-enabled HAR using radar point clouds. Point clouds provide 3-dimensional spatial information, which helps improve recognition accuracy, but time-series point clouds also bring challenges such as larger data volume, 4-dimensional data processing, and more intensive computation. To tackle these challenges, we adopt 2-dimensional histograms for multi-view point cloud processing and propose EdgeActNet, a binary neural network for point cloud-based human activity classification on edge devices. In the evaluation, EdgeActNet achieved the best results, with average accuracies of 97.63% on the MMActivity dataset and 95.03% on the point cloud samples of the DGUHA dataset, while reducing memory consumption by $16.9\times$ and inference time by $11.5\times$ compared to its full-precision counterpart. Our work is also the first to apply a 2D histogram-based multi-view representation and BNNs to time-series point cloud classification.
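The abstract describes projecting each radar point cloud frame onto 2-dimensional histograms to obtain a multi-view representation. The sketch below is only an illustration of that general idea, not the authors' implementation: the bin counts, coordinate bounds, and choice of orthogonal projections are assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): projecting one 3D radar point
# cloud frame onto 2D occupancy histograms for three orthogonal views
# (xy, xz, yz), which a CNN-style classifier such as a BNN could consume.
# Bin count and coordinate ranges are assumed values for this example.
import numpy as np

def point_cloud_to_multiview_histograms(points, bins=32,
                                         bounds=((-3, 3), (-3, 3), (0, 3))):
    """points: (N, 3) array of (x, y, z) radar detections for one frame."""
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = bounds
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # One 2D histogram per orthogonal projection of the point cloud.
    hist_xy, _, _ = np.histogram2d(x, y, bins=bins, range=[(xmin, xmax), (ymin, ymax)])
    hist_xz, _, _ = np.histogram2d(x, z, bins=bins, range=[(xmin, xmax), (zmin, zmax)])
    hist_yz, _, _ = np.histogram2d(y, z, bins=bins, range=[(ymin, ymax), (zmin, zmax)])
    # Stack the views as channels; a time series of frames would add a time axis.
    return np.stack([hist_xy, hist_xz, hist_yz], axis=0)

# Usage: 200 random points standing in for one mmWave radar frame.
frame = np.random.uniform(low=[-3, -3, 0], high=[3, 3, 3], size=(200, 3))
views = point_cloud_to_multiview_histograms(frame)
print(views.shape)  # (3, 32, 32)
```

Collapsing each frame to fixed-size 2D views in this way avoids operating directly on irregular 4D (space + time) point sets, which is consistent with the paper's motivation of reducing computation for edge deployment.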