Structure-Aware in-Air Handwritten Text Recognition with Graph-Guided Cross-Modality Translator

Published: 01 Jan 2024 · Last Modified: 02 Oct 2024 · ICASSP 2024 · CC BY-SA 4.0
Abstract: In-air handwriting, a new mode of human-computer interaction, plays an important role in many virtual/mixed-reality applications. Existing methods for in-air handwritten text recognition (IAHTR) typically process handwriting trajectories directly with deep neural networks. However, these methods learn discriminative patterns only by modelling low-level relationships between adjacent trajectory points, while ignoring the inherent geometric structures of characters. We instead propose a novel Graph-guided Cross-modality Translator for IAHTR, which explicitly exploits the geometric structures of characters to guide the decoding of trajectories via a graph-guided cross-modality attention mechanism, without introducing extra annotation costs. Experiments on the IAHEW-UCAS2016 and IAM-OnDB benchmarks show that our method achieves state-of-the-art performance for handwritten text recognition.
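The abstract does not specify the attention formulation, but the general idea of graph-guided cross-modality attention can be sketched as follows: embeddings of a character's geometric structure (graph nodes) act as keys/values, and decoder states derived from the trajectory act as queries, so that decoding attends to structural features. This is a minimal illustrative sketch, not the paper's actual architecture; the one-round neighbor aggregation, shapes, and function names are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_guided_cross_attention(queries, graph_nodes, adjacency):
    """Hypothetical sketch of graph-guided cross-modality attention.

    queries:     (T, d) trajectory-decoder states (one per decoding step)
    graph_nodes: (N, d) embeddings of the character's geometric graph nodes
    adjacency:   (N, N) binary adjacency of the structure graph
    Returns a (T, d) context read from the graph and the (T, N) weights.
    """
    # Inject structure: one round of mean aggregation over graph neighbors
    # (an assumed, simple stand-in for whatever graph encoder the paper uses).
    deg = adjacency.sum(axis=1, keepdims=True) + 1e-8
    structural = (adjacency @ graph_nodes) / deg
    keys = values = graph_nodes + structural

    # Standard scaled dot-product cross-attention: trajectory queries
    # attend over structure-aware graph-node keys/values.
    scores = (queries @ keys.T) / np.sqrt(queries.shape[-1])
    weights = softmax(scores, axis=-1)
    context = weights @ values
    return context, weights

# Toy usage: 4 decoding steps attend over a 5-node character graph.
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
nodes = rng.standard_normal((5, 8))
adj = (rng.random((5, 5)) > 0.5).astype(float)
ctx, w = graph_guided_cross_attention(q, nodes, adj)
```

Each row of `w` sums to 1, so every decoding step produces a convex combination of structure-aware node embeddings, which is one plausible way geometric structure could guide trajectory decoding without extra annotations.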
