OmniReflect: Discovering Transferable Constitutions for LLM agents via Neuro-Symbolic Reflections

ACL ARR 2025 May Submission 4683 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Efforts to improve LLM agent performance on complex tasks have largely focused on fine-tuning and iterative self-correction. However, these approaches often lack generalizable mechanisms for long-term learning and remain inefficient in dynamic environments. We introduce OmniReflect, a hierarchical reflection-driven framework that constructs constitutions (compact sets of guiding principles distilled from past task experiences) to enhance the effectiveness and efficiency of LLM agents. OmniReflect operates in two modes: Self-sustaining, where a single agent periodically curates its own reflections during task execution, and Co-operative, where a meta-advisor derives constitutions from a small calibration set to guide another agent. To construct these constitutional principles, we employ Neural, Symbolic, and Neuro-Symbolic techniques, offering a balance between contextual adaptability and computational efficiency. Empirical results averaged across models show major improvements in task success, with absolute gains of +10.3% on ALFWorld, +23.8% on BabyAI, and +8.3% on PDDL in the Self-sustaining mode. Similar gains are seen in the Co-operative mode, where a lightweight Qwen3-4B ReAct agent outperforms all Reflexion baselines on BabyAI. These findings highlight the robustness and effectiveness of OmniReflect across environments and backbones.
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: Self-reflection, reflection, neuro-symbolic, LLM agent, abstraction, task-oriented
Languages Studied: English
Submission Number: 4683