Transformers as Multi-task Learners: Decoupling Features in Hidden Markov Models

ICLR 2026 Conference Submission 16558 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Transformer, expressive power, hidden Markov model
Abstract: Transformer-based models have shown remarkable capabilities in sequence learning across a wide range of tasks, often performing well on a specific task by leveraging input-output examples. Understanding the mechanisms by which these models capture and transfer information is important for advancing our understanding of them, as well as for guiding the design of more effective and efficient algorithms. However, despite their empirical success, a comprehensive theoretical understanding of these mechanisms remains limited. In this work, we investigate the layerwise behavior of Transformers to uncover the mechanisms underlying their multi-task generalization ability. Exploring a canonical class of sequence models, Hidden Markov Models (HMMs), which are fundamental to many language tasks, we observe that: (i) the lower layers of Transformers focus on extracting feature representations, influenced primarily by neighboring tokens; (ii) in the upper layers, features become decoupled, exhibiting a high degree of disentanglement across time steps. Building on these empirical insights, we provide a theoretical analysis of the expressive power of Transformers. Our explicit constructions align closely with the empirical observations, providing theoretical support for the Transformer's effectiveness and efficiency in sequence learning across diverse tasks.
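For reference, a minimal sketch of the HMM setting the abstract refers to, under standard assumptions (the paper's exact parameterization may differ): hidden states $z_t$ follow a Markov chain, and each observed token $x_t$ is emitted conditionally on $z_t$, so the joint distribution factorizes as

$$
P(x_{1:T}, z_{1:T}) \;=\; \pi(z_1)\, O(x_1 \mid z_1) \prod_{t=2}^{T} A(z_t \mid z_{t-1})\, O(x_t \mid z_t),
$$

where $\pi$ is the initial-state distribution, $A$ the state-transition kernel, and $O$ the emission distribution.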
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 16558