Guiding Large Language Models at Test Time: A Unified Review of LLM-Training-Free Methods

ACL ARR 2026 January Submission2152 Authors

02 Jan 2026 (modified: 20 Mar 2026) · License: CC BY 4.0
Keywords: Large Language Model, Test-Time Guidance, LLM-Training-Free
Abstract: Adapting Large Language Models (LLMs) to dynamic constraints typically requires expensive fine-tuning. While training-free test-time guidance offers a flexible alternative, the literature remains fragmented across isolated subfields. This paper presents a unified review of LLM-training-free guidance, systematizing methods that steer model behavior without parameter updates. We propose a taxonomy based on the inference lifecycle, categorizing interventions into Input-Space, Latent-Space, Decoding-Space, and Output-Space guidance. Furthermore, we analyze critical trade-offs regarding model accessibility, computational cost, and control granularity. Finally, we discuss emerging frontiers, highlighting the convergence of control mechanisms toward unified architectures and the shift toward rigorous, interpretability-driven steering.
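To make the Decoding-Space category of the taxonomy concrete, the following is a minimal toy sketch of decoding-time guidance: a frozen model's token logits are biased by an external constraint signal at sampling time, steering the output with no parameter update. The vocabulary, logit values, and bias scheme here are invented for illustration, not taken from any specific method in the survey.

```python
import math

# Toy vocabulary and "frozen model" logits (invented for illustration).
VOCAB = ["safe", "risky", "neutral"]

def softmax(logits):
    # Numerically stable softmax over a list of floats.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def guided_decode(logits, bias):
    # Decoding-space guidance: add a per-token bias (e.g. from a
    # constraint checker or reward signal) to the frozen model's
    # logits, then decode greedily. The model weights never change.
    adjusted = [l + bias.get(tok, 0.0) for l, tok in zip(logits, VOCAB)]
    probs = softmax(adjusted)
    return VOCAB[max(range(len(probs)), key=probs.__getitem__)]

base_logits = [1.0, 2.0, 0.5]          # unguided model prefers "risky"
print(guided_decode(base_logits, {}))              # -> risky
print(guided_decode(base_logits, {"risky": -5.0})) # -> safe
```

The same interface generalizes to the other lifecycle stages the taxonomy names: Input-Space guidance edits the prompt before the model runs, Latent-Space guidance perturbs hidden activations, and Output-Space guidance filters or rewrites completed generations.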
Paper Type: Long
Research Area: Language Models
Research Area Keywords: chain-of-thought, prompting, retrieval-augmented generation
Contribution Types: Surveys
Languages Studied: English
Submission Number: 2152