Interpretable 3D Human Action Analysis with Temporal Convolutional Networks

Tae Soo Kim, Austin Reiter

Apr 07, 2017 (modified: Apr 07, 2017) CVPR 2017 BNMW Submission
  • Paper length: 8 pages
  • Abstract: The discriminative power of modern deep learning models for 3D human action recognition has grown increasingly potent. In conjunction with the recent resurgence of 3D skeleton-based human action representation, the quality and pace of recent progress have been significant. However, the inner workings of state-of-the-art learning-based methods for 3D human action recognition still remain mostly a black box. In this work, we propose to use a new class of models known as Temporal Convolutional Neural Networks (TCN) for 3D human action recognition. Compared to popular LSTM-based recurrent neural network models, given interpretable input such as 3D skeletons, the TCN provides a way to explicitly learn readily interpretable spatio-temporal representations for 3D human action recognition. We describe our strategy for re-designing the TCN with interpretability in mind, and show how these characteristics of the model are leveraged to construct a powerful 3D activity recognition method. Through this work, we wish to take a step towards a spatio-temporal model that is easier to understand, explain and interpret. The resulting model, Res-TCN, achieves state-of-the-art results on the largest 3D human action recognition dataset, NTU-RGBD.
  • TL;DR: A new CNN-based approach for 3D human action analysis that improves model interpretability and discriminative power.
  • Conflicts: cs.jhu.edu
  • Keywords: deep learning, convolutional neural networks, human action, activity recognition, interpret-ability, supervised learning, 3D vision
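The abstract describes applying temporal convolutions with residual connections (Res-TCN) to 3D skeleton sequences. As a rough illustration of the idea, here is a minimal NumPy sketch of a residual temporal-convolution block operating on a sequence of flattened joint coordinates. This is not the authors' implementation: the kernel width, channel sizes, and function names below are illustrative assumptions.

```python
import numpy as np

def temporal_conv(x, w, b):
    """1D convolution along the time axis with same-length padding.

    x: (T, C_in) sequence of per-frame features
    w: (k, C_in, C_out) temporal kernel, b: (C_out,) bias
    """
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((x.shape[0], w.shape[2]))
    for t in range(x.shape[0]):
        # contract the (k, C_in) window against the kernel -> (C_out,)
        out[t] = np.tensordot(xp[t:t + k], w, axes=([0, 1], [0, 1])) + b
    return out

def res_tcn_block(x, w1, b1, w2, b2):
    """Residual block: identity skip plus two temporal convs with ReLU.

    The skip connection is what lets filter activations be traced back
    to the input skeleton features, aiding interpretability.
    """
    h = np.maximum(temporal_conv(x, w1, b1), 0.0)  # conv + ReLU
    h = temporal_conv(h, w2, b2)                   # second conv
    return np.maximum(x + h, 0.0)                  # add skip, ReLU

# Illustrative input: 20 frames of an NTU-style skeleton,
# 25 joints x 3 coordinates = 75 channels per frame.
rng = np.random.default_rng(0)
x = rng.normal(size=(20, 75))
w1 = rng.normal(size=(9, 75, 75)) * 0.01
w2 = rng.normal(size=(9, 75, 75)) * 0.01
b1 = np.zeros(75)
b2 = np.zeros(75)
y = res_tcn_block(x, w1, b1, w2, b2)  # shape (20, 75), same as input
```

Because the convolution is purely along time and the block preserves the input dimensionality, each output channel can be inspected as a learned temporal filter over the raw joint coordinates.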
