Instructions shape Production of Language, not Processing

01 May 2026 (modified: 03 May 2026) · Under review for TMLR · CC BY 4.0
Abstract: Instructions trigger a production-centered mechanism in language models. Through a cognitively inspired lens that separates language processing and production, we reveal this mechanism as an asymmetry between the two stages by probing task-specific information layer-wise across five binary judgment tasks. Specifically, we measure how instruction tokens shape information both when sample tokens — the input under evaluation — are processed and when output tokens are produced. Across prompting variations, task-specific information in sample tokens stays largely stable and correlates only weakly with behavior, whereas the same information in output tokens varies substantially and correlates strongly. Attention-based interventions confirm this pattern causally: blocking instruction flow to all subsequent tokens reduces both behavior and information in output tokens, whereas blocking it only to sample tokens has minimal effect on either. The asymmetry generalizes across model families and tasks, and sharpens with model scale and instruction-tuning — both of which disproportionately affect the production stage. Our findings suggest that understanding model capabilities requires both jointly assessing internals and behavior, and decomposing the internal perspective by token position to separate the processing of input tokens from the production of output tokens.
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Sarath_Chandar1
Submission Number: 8711