Abstract: Human activity recognition (HAR) based on wearable devices has received significant attention from scholars in recent years. Nevertheless, the lack of effective exploitation of multiview learning and the limited capacity for uncertainty analysis remain major challenges for high-precision, high-confidence activity recognition. This article therefore proposes a novel multiview uncertainty-aware graph convolutional network (MVUAGCN) model. Specifically, MVUAGCN first divides the raw time series data into multiview data according to sensor type and then structures the derived data into multiview graph topologies. Next, multiview residual graph convolutional networks with Chebyshev polynomials are deployed to generate the sources of evidence (SoEs). All multiview SoEs are then mapped into the evidence space through the Dirichlet distribution to quantify the degree of uncertainty in MVUAGCN. Finally, the mapped SoEs are fused sequentially and the decision is made according to the maximum probability. Comprehensive experimental evaluations were conducted on four public HAR datasets: PAMAP2, MHEALTH, OPPORTUNITY, and UCI HAR. With a nearly 5% improvement over CNN-based approaches, MVUAGCN achieves recognition accuracies of 99.06%, 100%, 97.84%, and 98.25% on these four datasets, respectively.
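The Dirichlet-based evidence mapping and sequential fusion summarized above correspond to the standard subjective-logic treatment of per-view evidence. The sketch below is a minimal illustration under that assumption, not the paper's implementation: the function names and the toy evidence values are hypothetical, and it shows only how per-view evidence could be turned into belief and uncertainty masses and then combined across two views with a reduced Dempster-Shafer rule.

```python
import numpy as np

def evidence_to_opinion(evidence):
    """Map non-negative per-class evidence to a Dirichlet-based opinion.

    Assumes the standard subjective-logic formulation: alpha_k = e_k + 1,
    S = sum(alpha_k), belief b_k = e_k / S, uncertainty u = K / S.
    """
    alpha = evidence + 1.0              # Dirichlet parameters
    strength = alpha.sum()              # Dirichlet strength S
    belief = evidence / strength        # per-class belief masses
    uncertainty = len(evidence) / strength  # remaining uncertainty mass
    return belief, uncertainty

def fuse_two_views(b1, u1, b2, u2):
    """Fuse two views' opinions with a reduced Dempster-Shafer rule."""
    conflict = sum(b1[i] * b2[j]
                   for i in range(len(b1))
                   for j in range(len(b2)) if i != j)
    scale = 1.0 / (1.0 - conflict)      # renormalize after removing conflict
    belief = scale * (b1 * b2 + b1 * u2 + b2 * u1)
    uncertainty = scale * (u1 * u2)
    return belief, uncertainty

# Toy example: evidence from two sensor-specific (per-view) networks.
e_view1 = np.array([8.0, 1.0, 0.5])     # e.g., accelerometer view
e_view2 = np.array([5.0, 2.0, 0.5])     # e.g., gyroscope view
b1, u1 = evidence_to_opinion(e_view1)
b2, u2 = evidence_to_opinion(e_view2)
b, u = fuse_two_views(b1, u1, b2, u2)
print("fused beliefs:", b, "uncertainty:", u, "decision:", int(np.argmax(b)))
```

In this formulation the fused belief masses and uncertainty still sum to one, and the class with the maximum fused belief is taken as the decision, mirroring the maximum-probability rule described in the abstract.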