Applying Center Loss to Neural Networks for Sequence Prediction: A Study for Handwriting Recognition
Abstract: We propose a method to improve the overall accuracy of a neural network for sequence prediction without using more training data or adding more parameters. We apply a center loss at the sequence level as an auxiliary task: at every epoch we compute the center of each class, then apply a center loss to each element of the sequence to reduce the intra-class distance. Center loss makes features both more discriminative and more compact in the feature space, which increases the accuracy of the network and reduces overfitting. The network is trained jointly on the sequence prediction task and the center loss auxiliary task, so the extra computation occurs only during training, not at inference. We evaluate our method in a handwriting text recognition context on seven datasets. In addition to outperforming methods that do not use additional data on all datasets, our method achieves competitive results compared to those that do, with faster inference and fewer parameters. We also show that our method, applied to a light neural network, improves accuracy and achieves performance competitive with deeper models. The advantage of a light model is the processing speed required for real applications. Code is available at https://github.com/simon-corbi/htr-ijcnn-2025.
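To make the described training scheme concrete, here is a minimal PyTorch-style sketch of per-epoch class centers combined with an element-wise center loss. Everything below is illustrative, not the authors' implementation: the function names, the choice of CTC as the sequence prediction loss, and the weight `lam` are assumptions; see the linked repository for the actual code.

```python
import torch
import torch.nn.functional as F

def compute_class_centers(features, labels, num_classes):
    """Mean feature vector per class, recomputed once per epoch.

    features: (N, D) element-level features collected over the epoch
    labels:   (N,)   class index assigned to each element
    """
    centers = torch.zeros(num_classes, features.size(1))
    counts = torch.zeros(num_classes)
    centers.index_add_(0, labels, features)
    counts.index_add_(0, labels, torch.ones_like(labels, dtype=torch.float))
    # Avoid division by zero for classes unseen this epoch.
    return centers / counts.clamp(min=1).unsqueeze(1)

def sequence_center_loss(seq_features, seq_labels, centers):
    """Pull each element of the sequence toward its class center.

    seq_features: (T, D) features of one predicted sequence
    seq_labels:   (T,)   class index for each sequence element
    """
    return F.mse_loss(seq_features, centers[seq_labels])

# Joint objective per batch (hypothetical weighting; CTC is assumed here
# as the sequence prediction loss typical for handwriting recognition):
# total_loss = ctc_loss + lam * sequence_center_loss(feats, labels, centers)
```

Since the centers are treated as fixed within an epoch, the auxiliary term adds only a lookup and a distance computation to each training step and vanishes entirely at inference, consistent with the abstract's claim.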
External IDs: dblp:conf/ijcnn/CorbilleS25