Developing Guidelines for Human-LLM Agent Teams: A Multi-Stakeholder Lens
Keywords: human-LLM agent teams, AI agents, generative AI, hybrid intelligence, guidelines
TL;DR: We develop 24 multi-stakeholder guidelines for the principled design of human-LLM agent teams through a three-step iterative process.
Abstract: Agents based on Large Language Models (LLM agents) have the potential to work with humans as part of a team to achieve specific goals. The natural language interface of LLM agents and their high level of autonomy enable more seamless collaboration than previous technologies, allowing them to carry out tasks autonomously and to engage in conversations with humans, e.g., to clarify goals, request authorizations, or double-check decisions. However, the current literature lacks systematic design guidelines for these human-LLM agent teams. This gap might foster misunderstandings, misuse of autonomy, and a lack of common ground, potentially leading to collaboration pitfalls. To mitigate these risks, we develop 24 guidelines for the principled design of human-LLM agent teams. We adopt a multi-stakeholder approach and propose guidelines for LLM agents, human team members, team designers, and embedding organizations. To develop these guidelines, we distill design recommendations from an exploratory workshop with 15 experts on human-AI teaming and from a literature review of 93 empirical papers on human-LLM collaboration. Drawing on the literature on human teams, we conceptually categorize the recommendations across different stages of the teaming process. A user study with 10 additional experts suggests the guidelines can help prevent collaboration pitfalls in human-LLM agent teams in workplace settings.
Area: Human-Agent Interaction (HAI)
Generative AI: I acknowledge that I have read and will follow this policy.
Submission Number: 1023