Keywords: lifelong self-evolving, task-oriented dialogue, multi-agent, evolutionary computation
Abstract: Traditional task-oriented dialog systems cannot evolve from ongoing interactions or adapt to new domains after deployment, a critical limitation in dynamic real-world environments. Continual learning approaches depend on episodic retraining with human-curated data and thus fail to achieve autonomous lifelong improvement. While evolutionary computation and LLM-driven self-improvement offer promising mechanisms for dialog optimization, they lack a unified framework for holistic, iterative strategy refinement. To bridge this gap, we propose DarwinTOD, a lifelong self-evolving dialog framework that systematically integrates these two paradigms, enabling continuous strategy optimization from a zero-shot base without task-specific fine-tuning. DarwinTOD maintains an Evolvable Strategy Bank and operates through a dual-loop process: online multi-agent dialog execution with peer critique, and offline structured evolutionary operations that refine the strategy bank using accumulated feedback. This closed-loop design enables autonomous, continuous improvement without human intervention. Extensive experiments show that DarwinTOD surpasses previous state-of-the-art methods and exhibits continuous performance gains throughout evolution. Our work provides a novel framework for building dialog systems with lifelong self-evolution capabilities. The code is available at \href{https://anonymous.4open.science/r/DarwinTOD-BBD1}{Anonymous GitHub}.
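The dual-loop process described in the abstract can be sketched in miniature. This is a hedged illustration only: the class and function names (`StrategyBank`, `run_dialogue`, `evolve`) and the scoring heuristic are hypothetical stand-ins, not the authors' actual implementation, which uses LLM agents rather than the toy scoring below.

```python
import random

class StrategyBank:
    """Evolvable pool of dialog strategies with accumulated feedback scores."""

    def __init__(self, strategies):
        self.entries = {s: 0.0 for s in strategies}

    def sample(self):
        # Online loop: prefer higher-scoring strategies, with random tie-breaking.
        return max(self.entries, key=lambda s: self.entries[s] + random.random())

    def update(self, strategy, feedback):
        # Accumulate peer-critique feedback for the executed strategy.
        self.entries[strategy] += feedback

    def evolve(self):
        # Offline loop: drop the worst-scoring strategy and mutate the best one.
        worst = min(self.entries, key=self.entries.get)
        best = max(self.entries, key=self.entries.get)
        del self.entries[worst]
        self.entries[best + "+refined"] = self.entries[best]

def run_dialogue(strategy):
    # Placeholder for multi-agent dialog execution plus peer critique;
    # returns a scalar feedback score (hypothetical scoring rule).
    return 1.0 if "clarify" in strategy or "refined" in strategy else 0.2

bank = StrategyBank(["clarify-first", "confirm-slots", "greet-only"])
for _ in range(3):                 # online: execute dialogs, collect critique
    s = bank.sample()
    bank.update(s, run_dialogue(s))
bank.evolve()                      # offline: structured evolutionary refinement
```

The sketch only captures the closed-loop shape: feedback from online execution steers which strategies survive and which get refined offline, with no human in the loop.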
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: Dialogue and Interactive Systems, Machine Learning for NLP, NLP Applications
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 2851