Highlights

• An end-to-end subtitle detection and recognition system for East Asian languages that achieves near-human-level recognition performance.

• A novel image operator that exploits sequence information across consecutive video frames to detect the subtitle's top/bottom boundaries and single-character width.

• An ensemble of Convolutional Neural Networks (CNNs) for classifying East Asian characters over very large character dictionaries. Visualizations of the CNNs show that different models capture distinctive features of the characters.

Abstract

In this paper, we propose an end-to-end subtitle detection and recognition system for videos in East Asian languages. The system consists of multiple stages. Subtitles are first detected by a novel image operator based on the sequence information of consecutive video frames. Then, an ensemble of CNNs trained on synthetic data detects and recognizes East Asian characters. Finally, a dynamic programming approach leveraging language models combines the per-character results into complete text-line transcriptions. The proposed system achieves average end-to-end accuracies of 98.2% on 40 videos in Simplified Chinese and 98.3% on 40 videos in Traditional Chinese, significantly outperforming existing methods. The near-perfect accuracy of our system dramatically narrows the gap between human cognitive ability and state-of-the-art algorithms on this task.
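The detection stage rests on one observation: subtitles stay fixed across many consecutive frames while the background changes. A minimal sketch of such a temporal operator is given below; the abstract does not specify the paper's actual operator, so the function name `temporal_stability_map`, the variance criterion, and the `threshold` value are illustrative assumptions only.

```python
import numpy as np

def temporal_stability_map(frames, threshold=10.0):
    """Hypothetical sketch of a sequence-based subtitle detector.

    Subtitle pixels are (nearly) constant across consecutive frames,
    so low per-pixel temporal variance marks candidate text regions.
    frames: list of grayscale frames, each an (H, W) array.
    Returns a boolean stability mask and a row profile whose peak
    indicates the subtitle band (top/bottom boundaries).
    """
    stack = np.stack(frames).astype(np.float32)  # shape (T, H, W)
    variance = stack.var(axis=0)                 # per-pixel temporal variance
    stable = variance < threshold                # static content across time
    row_profile = stable.sum(axis=1)             # peaks at the subtitle rows
    return stable, row_profile

# Toy usage: random backgrounds with a constant horizontal "subtitle" band.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 255, (40, 60)).astype(np.float32) for _ in range(5)]
for f in frames:
    f[30:35, :] = 200.0  # static band shared by all frames
stable, profile = temporal_stability_map(frames)
```

In this toy setup every pixel in rows 30–34 has zero temporal variance, so `profile` attains its maximum there; an analogous column profile within the detected band could expose single-character width, as the highlights describe.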