Boundary–Programmable Prompting: Modular Design Patterns for Multi-Turn LLM Agents

ACL ARR 2026 January Submission 2436 Authors

03 Jan 2026 (modified: 20 Mar 2026), CC BY 4.0
Keywords: Large Language Models, Modular Prompting, Interpretability, Socratic tutoring
Abstract: Large language model agents are often governed by single-block prompts that entangle goals, heuristics, and memory, making their behavior brittle, opaque, and hard to control. We introduce \textbf{Boundary–Programmable Prompting (BPP)}, a modular prompting architecture that separates (i) boundary prompts encoding role and interpretive constraints, (ii) programmable heuristics that use \emph{fuzzy heuristic ranges} to modulate behavior symbolically, and (iii) a short-term memory schema for tracking interaction-level state. Unlike prompting methods that rely on natural language alone, BPP provides machine-readable ranges that LLMs use to select strategies dynamically during inference. This structure supports more interpretable adaptation and bounds the space of valid actions to reduce hallucination. In a Socratic tutoring testbed, ablating each BPP module leads to selective, interpretable failures (e.g., loss of coherence without memory, rigidity without programmable heuristics, role drift without boundaries), demonstrating that modular prompting can serve as an inference-time control architecture.
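The abstract describes three modules: a boundary prompt, programmable heuristics with machine-readable fuzzy ranges, and a short-term memory schema, assembled into one inference-time prompt. A minimal sketch of how such an assembly could look is below; the module names, the range format, and the strategy labels are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass

@dataclass
class HeuristicRange:
    """Hypothetical 'fuzzy heuristic range': a named signal whose
    sub-ranges map to discrete tutoring strategies (assumed format)."""
    name: str
    low: float
    high: float
    strategies: dict  # (lo, hi) sub-range -> strategy label

    def select(self, value: float) -> str:
        # Clamp the signal into the declared range, then pick the
        # strategy whose sub-range contains it.
        v = max(self.low, min(self.high, value))
        for (lo, hi), strategy in self.strategies.items():
            if lo <= v <= hi:
                return strategy
        return "default"

# Illustrative boundary prompt for the Socratic-tutoring testbed.
BOUNDARY = "You are a Socratic tutor. Never state the final answer outright."

hint_range = HeuristicRange(
    name="hint_directness", low=0.0, high=1.0,
    strategies={(0.0, 0.3): "ask_probing_question",
                (0.3, 0.7): "offer_partial_hint",
                (0.7, 1.0): "walk_through_step"})

def build_prompt(student_confusion: float, memory: list[str]) -> str:
    """Assemble the three BPP-style modules into one system prompt."""
    strategy = hint_range.select(student_confusion)
    memory_block = "\n".join(f"- {m}" for m in memory) or "- (empty)"
    return (f"[BOUNDARY]\n{BOUNDARY}\n"
            f"[HEURISTIC] {hint_range.name}={student_confusion:.2f} "
            f"-> strategy: {strategy}\n"
            f"[MEMORY]\n{memory_block}")
```

Because the heuristic ranges are symbolic rather than buried in prose, ablating a module (e.g. passing an empty memory list, or removing the `[BOUNDARY]` block) is a one-line change, which is what makes the selective-failure ablations described above straightforward to run.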
Paper Type: Short
Research Area: Human-AI Interaction/Cooperation and Human-Centric NLP
Research Area Keywords: Human-Centered NLP, Language Modeling
Contribution Types: Position papers
Languages Studied: English
Submission Number: 2436