Learning to Align: Addressing Character Frequency Distribution Shifts in Handwritten Text Recognition
Abstract: Handwritten text recognition aims to convert visual input into machine-readable text, and it remains challenging due to the evolving and context-dependent nature of handwriting. Character sets change over time, and character frequency distributions shift across historical periods or regions, often causing models trained on broad, heterogeneous corpora to underperform on specific subsets. To tackle this, we propose a novel loss function that incorporates the Wasserstein distance between the character frequency distribution of the predicted text and a target distribution empirically derived from training data. By penalizing divergence from expected distributions, our approach enhances both accuracy and robustness under temporal and contextual intra-dataset shifts. Furthermore, we demonstrate that character distribution alignment can also improve existing models at inference time, without retraining, by integrating the alignment objective as a scoring function in a guided decoding scheme. Experimental results across multiple datasets and architectures confirm the effectiveness of our method in boosting generalization and performance.
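The core quantity in the abstract, the Wasserstein distance between a predicted text's character frequency distribution and a reference distribution, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a fixed, ordered alphabet with unit spacing as the ground metric (the paper does not specify its ground metric), under which the 1-Wasserstein distance reduces to the sum of absolute differences between cumulative distribution functions.

```python
from collections import Counter

def char_freq_dist(text, alphabet):
    """Normalized character frequency vector over a fixed alphabet.

    Characters outside the alphabet are ignored; an empty text
    yields an all-zero vector.
    """
    counts = Counter(c for c in text if c in alphabet)
    total = sum(counts.values()) or 1
    return [counts[c] / total for c in alphabet]

def wasserstein_1d(p, q):
    """1-Wasserstein distance between two histograms on the same
    ordered support with unit spacing: sum of |CDF_p - CDF_q|.
    """
    dist, cp, cq = 0.0, 0.0, 0.0
    for pi, qi in zip(p, q):
        cp += pi
        cq += qi
        dist += abs(cp - cq)
    return dist

# Illustrative usage: compare a model prediction against a
# target distribution derived from (hypothetical) training data.
alphabet = list("abcdefghijklmnopqrstuvwxyz ")
pred = char_freq_dist("hallo warld", alphabet)
target = char_freq_dist("hello world", alphabet)
penalty = wasserstein_1d(pred, target)  # >= 0, and 0 iff identical
```

In the training setting described above, a term proportional to this penalty would be added to the recognition loss; at inference time, the same quantity could score candidate hypotheses in a guided decoding scheme, preferring beams whose character statistics match the target distribution.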
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: fine-tuning, multi-task learning, robustness, language change, inference methods
Contribution Types: NLP engineering experiment
Languages Studied: English, Greek, French
Submission Number: 7693