Evaluating Human Trust in LLM-Based Planners: A Preliminary Study

11 Aug 2025 (modified: 27 Oct 2025) | Submitted to NeurIPS Lock-LLM Workshop 2025 | CC BY 4.0
Keywords: Large language model, trust in AI systems, user study
Abstract: Large Language Models (LLMs) offer planning capabilities such as natural language explanations and iterative refinement, but human trust in these systems remains underexplored. We present a user study comparing trust in LLM-based and classical planners within a Planning Domain Definition Language (PDDL) domain. Using both subjective (trust ratings) and objective (evaluation accuracy) measures, we find that plan correctness is the primary driver of trust and performance. Explanations improve evaluation accuracy but have limited impact on trust, while plan refinement shows potential for increasing trust without significantly improving evaluation accuracy.
Submission Number: 2