ShopSimulator: Evaluating and Exploring RL-Driven LLM Agent for Shopping Assistants

ACL ARR 2026 January Submission7194 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: agent, evaluation, e‑commerce, environment
Abstract: Large language model (LLM)‑based agents are increasingly deployed in e‑commerce shopping. To perform thorough, user‑tailored product searches, agents must interpret personal preferences, engage in multi‑turn dialogues, and ultimately retrieve and discriminate among highly similar products. However, existing research has yet to provide a unified simulation environment that consistently captures all of these aspects, and typically focuses solely on evaluation benchmarks without training support. In this paper, we introduce ShopSimulator, a large‑scale and challenging Chinese shopping environment. Leveraging ShopSimulator, we evaluate LLMs across diverse scenarios, finding that even the best‑performing models achieve less than 40\% full‑success rate. Error analysis reveals that agents struggle with deep search and product selection in long trajectories, fail to balance the use of personalization cues, and do not engage effectively with users. Further training exploration provides practical guidance for overcoming these weaknesses, with the combination of supervised fine‑tuning (SFT) and reinforcement learning (RL) yielding significant performance improvements.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, evaluation, applications, agent evaluation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources
Languages Studied: Chinese
Submission Number: 7194