EvoTac: A Self-Evolving LLM Agent for Eliciting Reusable Tacit Negotiation Heuristics from Terminal Outcomes

Published: 02 Mar 2026, Last Modified: 10 Apr 2026
Venue: LLA 2026 Poster
License: CC BY 4.0
Keywords: LLM agents, self-evolving agents, tacit knowledge, negotiation, terminal outcome learning, continual learning, memory-augmented LLMs, opponent modeling, multi-agent systems, AI for strategic decision-making
TL;DR: EvoTac is a self-evolving LLM agent that extracts reusable tacit negotiation heuristics from terminal outcomes via layered memory and reflection, improving real-world negotiation performance.
Abstract: We propose EvoTac, an LLM-based framework for real-world negotiation that converts sparse terminal outcomes into reusable tacit experience without fine-tuning the base model. It continuously adapts to changing opponents and scenarios through a simple predict–reflect–update loop, using decoupled layered memory to represent the agent’s constraints, observed opponent behavior patterns, and persistent hypotheses about opponent stance and type. Experiments on a real-world online marketing negotiation task (predicting final commission rates) show that EvoTac outperforms traditional models and multiple LLM baselines in prediction accuracy and first-round offer hit rate.
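The predict–reflect–update loop with decoupled layered memory described in the abstract can be sketched as follows. This is a minimal illustration only: the function names, the scalar "stance shift" hypothesis, and the learning rate are assumptions for exposition, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class LayeredMemory:
    # Decoupled layers, mirroring the abstract's description:
    constraints: dict = field(default_factory=dict)        # the agent's own constraints
    behavior_patterns: list = field(default_factory=list)  # observed opponent behavior
    hypotheses: dict = field(default_factory=dict)         # persistent stance/type hypotheses

def predict(memory: LayeredMemory, scenario: dict) -> float:
    # Hypothetical: adjust a scenario prior by the current stance hypothesis.
    base = scenario.get("prior_rate", 0.15)
    return base + memory.hypotheses.get("stance_shift", 0.0)

def reflect(memory: LayeredMemory, prediction: float, outcome: float) -> float:
    # The terminal outcome is the only supervision signal; log it as a pattern.
    memory.behavior_patterns.append({"prediction": prediction, "outcome": outcome})
    return outcome - prediction

def update(memory: LayeredMemory, error: float, lr: float = 0.5) -> None:
    # Nudge the persistent hypothesis toward the observed outcome.
    memory.hypotheses["stance_shift"] = (
        memory.hypotheses.get("stance_shift", 0.0) + lr * error
    )

def episode(memory: LayeredMemory, scenario: dict, outcome: float) -> float:
    prediction = predict(memory, scenario)
    error = reflect(memory, prediction, outcome)
    update(memory, error)
    return prediction

# Repeated negotiations against the same opponent: predictions drift
# toward the observed final commission rate across episodes.
memory = LayeredMemory()
predictions = [episode(memory, {"prior_rate": 0.15}, 0.20) for _ in range(5)]
```

Note that in EvoTac itself the reflection step is performed by the LLM over negotiation transcripts rather than by a numeric update; the scalar hypothesis here only illustrates how sparse terminal feedback can refine a persistent belief without touching the base model's weights.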
Submission Number: 24