TextTeacher: What Can Language Teach About Images?

21 Jan 2026 (modified: 27 Apr 2026). Decision pending for TMLR. License: CC BY 4.0.
Abstract: The Platonic representation hypothesis suggests that sufficiently large models converge to a shared representation geometry, even across modalities. Motivated by this, we ask: Can the semantic knowledge of a language model efficiently improve a vision model? As an answer, we introduce TextTeacher, a simple auxiliary objective that injects text embeddings as an additional training signal for image classification. TextTeacher uses readily available image captions, a pre-trained and frozen text encoder, and a lightweight projection to produce semantic anchors that efficiently guide representations during training while leaving the inference-time model unchanged. On ImageNet with standard ViT backbones, TextTeacher improves accuracy by up to $+2.7$ percentage points (p.p.) and yields consistent transfer gains ($+1.0$ p.p. on average) under the same recipe and compute. It outperforms vision knowledge distillation, achieving higher accuracy at a matched compute budget, or similar accuracy $33\%$ faster. Our analysis indicates that TextTeacher acts as a feature-space preconditioner, shaping deeper layers during the early stages of training and aiding generalization by supplying complementary semantic cues. TextTeacher adds negligible overhead, requires no costly multimodal pretraining, and preserves the simplicity and latency of pure vision models. We release our code at \texttt{<URL upon acceptance>}.
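The abstract only sketches the objective at a high level. As a rough illustration, the PyTorch snippet below shows one plausible form of such an auxiliary alignment term: caption embeddings from a frozen text encoder are passed through a lightweight projection to form semantic anchors, and the image features are pulled toward them alongside the usual cross-entropy loss. The class name, cosine-alignment form, and weighting are assumptions for illustration, not the paper's actual implementation.

# Minimal sketch of a text-anchor auxiliary loss (illustrative; the exact loss
# form, anchor layer, and weighting used by TextTeacher are not given here).
import torch
import torch.nn.functional as F
from torch import nn

class TextTeacherLoss(nn.Module):
    """Aligns image features with projected caption embeddings (hypothetical form)."""

    def __init__(self, text_dim: int, image_dim: int, weight: float = 0.5):
        super().__init__()
        self.proj = nn.Linear(text_dim, image_dim)  # lightweight projection
        self.weight = weight

    def forward(self, image_feats: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # text_embeds come from a pre-trained, frozen text encoder (no gradients flow back)
        anchors = self.proj(text_embeds.detach())
        # cosine alignment between image features and semantic anchors
        align = 1.0 - F.cosine_similarity(image_feats, anchors, dim=-1).mean()
        return self.weight * align

# Training-step sketch (vision_model, classifier, and the caption -> text_embeds
# pipeline are assumed to exist; the auxiliary term is dropped at inference time):
#   feats = vision_model(images)
#   logits = classifier(feats)
#   loss = F.cross_entropy(logits, labels) + text_teacher(feats, text_embeds)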
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Liang-Chieh_Chen1
Submission Number: 7086