Learning Medium-Sensitivity Functions: A Case Study on QR Code Decoding

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Input-Sensitivity, Transformer, QR Code, QR Code Decoding
TL;DR: We examine whether Transformers can learn medium-sensitivity functions, using QR code decoding as a case study.
Abstract: The hardness of learning a function for a target task is related to its input sensitivity. For example, image classification tasks are input-insensitive, as minor corruptions should not affect the classification results, whereas arithmetic and symbolic computation, which have recently attracted interest as learning targets, are highly input-sensitive, as every input variable affects the result. This study investigates the learning of medium-sensitivity functions through learning-based Quick Response (QR) code decoding, which is sensitive to changes in the plain text yet insensitive to bit flips. Our experiments reveal that Transformers can robustly decode QR codes, even beyond the theoretical error-correction limit, while remaining sensitive to single-character changes in the plain text. We demonstrate that this robust decoding ability derives from the regularity of natural-language words, which Transformers trained on English-based datasets learn to exploit. Interestingly, this generalizes to words in different languages and to random alphabetical strings. To our knowledge, this study provides the first case study of learning medium-sensitivity functions, and it also suggests potential applications in which learning-based QR code decoding boosts classical methods.
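The "theoretical error-correction limit" mentioned in the abstract refers to QR codes' Reed-Solomon coding, which can correct errors only up to a fixed fraction of code symbols. This is not the paper's method, but a toy 3x repetition code sketches the same idea in miniature: one flip per triple is recoverable by majority vote, while two flips in the same triple exceed the limit and corrupt the decoded message.

```python
# Toy sketch of an error-correction limit (illustrative only; QR codes
# actually use Reed-Solomon codes, not repetition codes).

def encode(bits):
    # Repeat each bit three times.
    return [b for b in bits for _ in range(3)]

def decode(coded):
    # Majority vote over each consecutive triple.
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

msg = [1, 0, 1, 1, 0]
noisy = encode(msg)

# One flip per triple stays within the correction limit.
noisy[0] ^= 1
assert decode(noisy) == msg

# A second flip in the same triple exceeds the limit.
noisy[1] ^= 1
assert decode(noisy) != msg
```

The paper's observation is that learned decoders can go beyond such a hard combinatorial limit by exploiting an extra prior (the regularity of natural-language words) that classical decoders do not use.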
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 9270