Multi-expert Prompting Improves Reliability, Safety and Usefulness of Large Language Models

ACL ARR 2024 June Submission 4237 Authors

16 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: We present Multi-expert Prompting, an enhanced extension of ExpertPrompting (Xu et al., 2023), which efficiently guides a large language model (LLM) to fulfill an input instruction by simulating multiple expert behaviors. Multi-expert Prompting synthesizes and evaluates responses from these experts, selecting the best among the individual and combined responses through a coherent chain of thought built from seven carefully designed subtasks based on the Nominal Group Technique (Van de Ven and Delbecq, 1974). It is the first method to address the challenge of aggregating long-form answers from LLM expert agents within a single turn. Our evaluations demonstrate that Multi-expert Prompting significantly outperforms ExpertPrompting and comparable baselines in the truthfulness, factuality, informativeness, and usefulness of responses while reducing toxicity and hurtfulness. It further achieves state-of-the-art truthfulness, outperforming the best baseline by 8.69% with ChatGPT. Moreover, it is efficient, explainable, and highly adaptable to diverse scenarios, eliminating the need for manual prompt construction.
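
To make the abstract's workflow concrete, here is a minimal sketch of the Multi-expert Prompting idea, not the authors' exact templates. The `llm` callable, the three-expert default, and the prompt wording are all assumptions for illustration; the aggregation prompt paraphrases (and condenses) the paper's seven NGT-based subtasks at a high level.

```python
from typing import Callable

def multi_expert_prompting(
    instruction: str,
    llm: Callable[[str], str],  # hypothetical helper: prompt text -> response text
    n_experts: int = 3,
) -> str:
    # Turn 1: simulate several expert personas and collect one
    # long-form answer per expert.
    experts_prompt = (
        f"Imagine {n_experts} distinct experts best suited to answer the "
        "instruction below. For each expert, state their identity, then "
        "write that expert's complete answer.\n\n"
        f"Instruction: {instruction}"
    )
    expert_answers = llm(experts_prompt)

    # Turn 2 (a single aggregation turn): NGT-inspired subtasks that ask
    # the model to compare the experts' answers, merge agreed points,
    # resolve conflicts, draft a combined answer, and pick the best of
    # the individual and combined answers.
    aggregate_prompt = (
        "Given the expert answers below: (1) list points the experts agree "
        "on; (2) list conflicting points and resolve them; (3) list unique "
        "points worth keeping; (4) draft a combined answer from (1)-(3); "
        "(5) select the single best response among the combined answer and "
        "each individual expert's answer, and output only that final "
        "answer.\n\n"
        f"Instruction: {instruction}\n\nExpert answers:\n{expert_answers}"
    )
    return llm(aggregate_prompt)
```

In this sketch the aggregation happens in one turn over long-form answers, which is the single-turn aggregation challenge the abstract highlights; wiring `llm` to any chat-completion API would make it runnable.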
Paper Type: Long
Research Area: Special Theme (conference specific)
Research Area Keywords: Large Language Models, Efficient Inference, Multi-agent Systems
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings, Approaches to low-compute settings (efficiency)
Languages Studied: English
Submission Number: 4237