P-LAM: Two-Step Reasoning for Proactive Large Action Models

ACL ARR 2026 January Submission4347 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Proactive Large Action Models, Proactive Action Models, Confidence-Aware Intervention, Task-Aligned Prompt Templates, Large Action Models (LAMs)
Abstract: Recent work has expanded the focus of Large Language Models (LLMs) to Large Action Models (LAMs), which generate executable actions for tasks such as coding, reasoning, and data analysis. However, existing LAMs typically remain reactive and cannot initiate actions without explicit user instructions, limiting their usefulness in dynamic settings. We introduce P-LAM, a two-step proactive reasoning framework that autonomously determines when intervention is required and what task to perform based on environmental observations. The first step evaluates the need for intervention and infers the task type, while the second applies a task-aligned prompt template to generate an appropriate action plan. Without any post-training, P-LAM achieves a 61.31% success rate on the contamination-resistant LiveBench benchmark, substantially outperforming a conventional Chain-of-Thought approach by 26.31% across six domains, including coding and writing. These results demonstrate that lightweight proactive reasoning can markedly improve LAM performance and reliability, even in zero-shot settings.
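The two-step pipeline the abstract describes could be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the keyword rules stand in for the LLM-based intervention/task-type judgment, and the function names (`assess_observation`, `plan_action`) and templates are assumptions made for illustration.

```python
# Hypothetical sketch of P-LAM's two-step proactive reasoning loop.
# Step 1: decide whether to intervene and infer the task type from an
# environmental observation (keyword rules stand in for an LLM call here).
TASK_KEYWORDS = {
    "coding": ("traceback", "syntax error", "failing test"),
    "writing": ("draft", "typo", "unclear paragraph"),
}

def assess_observation(observation: str):
    obs = observation.lower()
    for task, cues in TASK_KEYWORDS.items():
        if any(cue in obs for cue in cues):
            return True, task          # intervene, with inferred task type
    return False, None                 # no intervention needed

# Step 2: apply a task-aligned prompt template to produce an action plan.
PROMPT_TEMPLATES = {
    "coding": "You are a coding agent. Address the observed issue: {obs}",
    "writing": "You are a writing assistant. Improve the observed text: {obs}",
}

def plan_action(observation: str):
    intervene, task = assess_observation(observation)
    if not intervene:
        return None                    # remain passive: no action generated
    return PROMPT_TEMPLATES[task].format(obs=observation)

print(plan_action("CI log shows a failing test in utils.py"))
```

Separating the "should I act, and on what?" judgment from "how do I act?" lets the second step use a prompt specialized to the inferred task type, which is the core idea behind the reported gains over a single generic Chain-of-Thought prompt.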
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: Language Modeling, Dialogue and Interactive Systems
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 4347