Plan-Aware Automated Context Engineering

Published: 03 Mar 2026, Last Modified: 26 Mar 2026 · NFAM 2026 Poster · CC BY 4.0
Keywords: Associative memory; context engineering; agent memory; long-horizon reasoning; plan-aware context compression; memory shaping; memory-augmented agents; attention efficiency; test-time memory optimization; agentic AI.
TL;DR: PAACE treats LLM agent context engineering as associative memory shaping, retaining or discarding context based on upcoming plan steps. Conditioning on future tasks stabilizes long-horizon reasoning while reducing context size and attention cost.
Abstract: Associative memory has re-emerged as a central abstraction for understanding attention, retrieval, and state evolution in modern AI systems, particularly in memory-augmented and agent-based models. In large language model (LLM) agents, memory is instantiated as an evolving prompt context that includes plans, intermediate reasoning, tool outputs, retrieved documents, and instructions. As agents execute long-horizon workflows, this memory state grows rapidly, becoming noisy, unstable, and increasingly difficult to reason over. We introduce PAACE (Plan-Aware Automated Context Engineering), a framework that formulates context management as a problem of associative memory shaping. PAACE learns to selectively retain, rewrite, compress, or discard memory elements based on their associative relevance to upcoming plan steps, effectively stabilizing task-relevant memory states while suppressing irrelevant or distracting information. Unlike query-aware or single-step compression methods, PAACE explicitly conditions memory transformations on the next-k tasks in an agent's plan, enabling multi-step associative retrieval and long-horizon reasoning. PAACE consists of two components: (1) PAACE-Syn, a scalable generator of synthetic agent workflows with explicit plan structure and stepwise memory supervision, and (2) PAACE-FT, a family of compact, distilled associative memory operators trained to imitate successful teacher-guided memory transformations. Experiments on OfficeBench and multi-objective question answering demonstrate that PAACE improves agent accuracy while substantially reducing memory load and cumulative attention cost. We show that learned associative memory shaping not only improves efficiency but also acts as a form of regularization, stabilizing reasoning over extended interactions.
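The abstract describes the shaping operation only at a high level; the paper's actual operators (PAACE-Syn and PAACE-FT) are learned, distilled models. Purely as an illustrative sketch of the control flow the abstract implies, the hypothetical Python fragment below scores each memory item against the next-k plan steps and then retains, compresses, or discards it by threshold. The scoring function, thresholds, and all names here are invented for illustration and are not from the paper.

```python
# Hypothetical sketch of plan-aware memory shaping. This is NOT the
# paper's PAACE-FT operator (a learned model); the relevance score and
# thresholds below are placeholders chosen for illustration only.
from dataclasses import dataclass


@dataclass
class MemoryItem:
    text: str    # a plan note, tool output, retrieved document, etc.
    tokens: int  # approximate token cost of keeping the item


def relevance(item: MemoryItem, plan_steps: list[str]) -> float:
    """Toy associative-relevance score: word overlap between the item
    and the upcoming plan steps. A real system would use a learned
    scorer or embedding similarity instead."""
    item_words = set(item.text.lower().split())
    step_words = {w for s in plan_steps for w in s.lower().split()}
    return len(item_words & step_words) / max(len(item_words), 1)


def shape_context(memory: list[MemoryItem],
                  plan: list[str],
                  step_idx: int,
                  k: int = 3,
                  keep_thresh: float = 0.3,
                  drop_thresh: float = 0.05) -> list[MemoryItem]:
    """Condition memory transformations on the next-k plan steps:
    retain highly relevant items verbatim, compress middling ones,
    and discard items with no associative link to upcoming tasks."""
    upcoming = plan[step_idx : step_idx + k]
    shaped: list[MemoryItem] = []
    for item in memory:
        score = relevance(item, upcoming)
        if score >= keep_thresh:
            shaped.append(item)  # retain as-is
        elif score >= drop_thresh:
            # Placeholder "compression": naive truncation. PAACE
            # instead rewrites/compresses with a distilled operator.
            summary = item.text[:120]
            shaped.append(MemoryItem(summary, len(summary.split())))
        # else: discard (no associative relevance to upcoming steps)
    return shaped
```

Even in this toy form, the sketch makes the abstract's key distinction concrete: the keep/compress/drop decision is conditioned on the next-k plan steps rather than on the current query alone, which is what separates plan-aware shaping from query-aware or single-step compression.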
Submission Number: 1