A Human-Aligned System for Guiding ReAct Agents through Adaptive Prompting and Dynamic Memory Editing
Abstract: This paper proposes a sustainable and adaptive prompting system for ReAct-based language model agents that enhances reasoning accuracy, contextual consistency, and alignment with human expectations in multi-step question answering. The system integrates task-adaptive evaluation, structured memory editing, and reactive reasoning cycles to enable iterative prompt refinement and context-aware adaptation. Unlike existing methods that treat prompts and memory as static, our approach dynamically updates both based on interaction feedback. Experiments across six QA domains show consistent improvements over strong baselines in LLM-as-judge and human evaluations, achieving up to 91.88% agreement with human judgment (Cohen’s Kappa). These results underscore the value of memory-aware prompting and reactive reasoning in developing reliable and adaptable LLM agents.
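The abstract does not detail the implementation, but the described loop (reactive reasoning, task-adaptive evaluation, structured memory editing, and prompt refinement) can be sketched at a high level. The following Python snippet is a purely illustrative sketch under assumed interfaces; the names (MemoryStore, refine_prompt, react_loop, the stubbed llm and judge callables) are hypothetical placeholders and not the authors' system.

```python
# Illustrative sketch only: all names and policies below are assumptions,
# not the paper's actual implementation.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class MemoryStore:
    """Minimal editable memory: entries can be added or rewritten over time."""
    entries: List[str] = field(default_factory=list)

    def edit(self, feedback: str) -> None:
        # Toy editing policy: drop duplicated feedback, then store the new entry.
        self.entries = [e for e in self.entries if e != feedback]
        self.entries.append(feedback)

    def as_context(self) -> str:
        return "\n".join(f"- {e}" for e in self.entries)


def refine_prompt(template: str, score: float) -> str:
    """Toy adaptive prompting: tighten instructions when the evaluation score is low."""
    if score < 0.5 and "cite memory entries" not in template:
        return template + "\nBe concise and cite memory entries explicitly."
    return template


def react_loop(question: str, llm: Callable[[str], str],
               judge: Callable[[str], float], max_steps: int = 3) -> str:
    memory = MemoryStore()
    template = "Answer step by step.\nMemory:\n{memory}\nQuestion: {question}"
    answer = ""
    for _ in range(max_steps):
        prompt = template.format(memory=memory.as_context(), question=question)
        answer = llm(prompt)                             # reasoning/acting step (stubbed)
        score = judge(answer)                            # task-adaptive evaluation signal
        memory.edit(f"last answer scored {score:.2f}")   # structured memory edit
        template = refine_prompt(template, score)        # iterative prompt refinement
        if score >= 0.8:
            break
    return answer


if __name__ == "__main__":
    fake_llm = lambda prompt: "Paris"                    # placeholder for a real LLM call
    fake_judge = lambda ans: 0.9 if ans == "Paris" else 0.3
    print(react_loop("What is the capital of France?", fake_llm, fake_judge))
```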
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: LLM/AI agents, prompting, retrieval-augmented generation, robustness, applications
Contribution Types: Model analysis & interpretability
Languages Studied: English, Thai
Submission Number: 764