Abstract: Online gesture recognition is a challenging task in practical application scenarios, since the gesture is not always performed directly in front of the camera. To address the challenges caused by the multiple viewpoints of skeleton data, in this paper we propose a novel view-invariant method for online skeleton-based gesture recognition. Our method treats the whole skeleton sequence as a point set, and we propose and apply a PCA-based view-invariant data preprocessing algorithm. By applying PCA according to the similarity of the distribution features of the point set, we can transform similar skeleton data to relatively stable viewpoints, which ensures the viewpoint stability of our gesture recognition model. We conduct extensive experiments on the NTU RGB+D and Northwestern-UCLA benchmark datasets, which contain multiple viewpoints, and the results demonstrate the effectiveness of the proposed method.
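The core idea of the abstract — treating the skeleton sequence as a 3D point set and using PCA to rotate it into a canonical, view-stable frame — can be sketched as below. This is a minimal illustration of generic PCA-based viewpoint normalization, not the paper's exact algorithm; the function name and the sign-fixing convention are our own assumptions.

```python
import numpy as np

def pca_view_normalize(points):
    """Rotate a skeleton point set into its PCA principal-axis frame.

    points: (N, 3) array of 3D joint positions collected from a skeleton
    sequence. Returns the centered points expressed in the coordinate
    frame of their principal components, so point sets with similar
    spatial distributions map to a similar canonical viewpoint
    regardless of the original camera angle.
    """
    centered = points - points.mean(axis=0)
    # 3x3 covariance of the point set; its eigenvectors give the
    # principal axes of the joint distribution.
    cov = centered.T @ centered / len(centered)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
    order = np.argsort(eigvals)[::-1]               # descending variance
    axes = eigvecs[:, order]
    # Resolve the sign ambiguity of each eigenvector deterministically:
    # make the largest-magnitude component of each axis positive.
    axes *= np.sign(axes[np.abs(axes).argmax(axis=0), range(3)])
    return centered @ axes
```

Because a rigid rotation of the input rotates the principal axes by the same amount, the projected coordinates are (up to per-axis sign) identical for rotated copies of the same point set — which is the sense in which such preprocessing stabilizes the viewpoint seen by the downstream recognition model.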