Abstract: In-air handwriting, an emerging modality of human-computer interaction, plays an important role in many virtual/mixed-reality applications. Existing methods for in-air handwritten text recognition (IAHTR) typically process handwriting trajectories directly with deep neural networks. However, these methods learn discriminative patterns only by modelling low-level relationships between adjacent trajectory points, ignoring the inherent geometric structures of characters. In contrast, we propose a novel Graph-guided Cross-modality Translator for IAHTR, which explicitly exploits the geometric structures of characters to guide trajectory decoding via a graph-guided cross-modality attention mechanism, without introducing extra annotation costs. Experiments on the IAHEW-UCAS2016 and IAM-OnDB benchmarks show that our method achieves state-of-the-art performance for handwritten text recognition.