Multi-expert Prompting Improves Reliability, Safety and Usefulness of Large Language Models

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: We present Multi-expert Prompting, an enhanced extension of ExpertPrompting (Xu et al., 2023), which guides a large language model (LLM) to fulfill an input instruction by answering as multiple experts, composing a combined response from the experts' responses, and selecting the best among the individual and combined responses. Our evaluations demonstrate that Multi-expert Prompting significantly surpasses ExpertPrompting and comparable baselines in improving the truthfulness, factuality, informativeness, and usefulness of LLM outputs while reducing their toxicity and hurtfulness, achieving state-of-the-art truthfulness. Moreover, it adapts readily to diverse scenarios and eliminates the need for manual prompt construction.
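
The abstract outlines a four-step pipeline: propose expert personas, answer the instruction as each expert, compose a combined response, and select the best candidate. Below is a minimal sketch of that pipeline, assuming a generic text-completion callable `llm`; the expert-elicitation, aggregation, and selection prompt wordings are illustrative assumptions, not the authors' exact prompts.

```python
# Hypothetical sketch of Multi-expert Prompting as described in the abstract.
# `llm` is any function mapping a prompt string to a completion string;
# all prompt wordings below are assumptions for illustration.

def multi_expert_prompting(llm, instruction, n_experts=3):
    # Step 1: elicit distinct expert identities suited to the instruction.
    experts = llm(
        f"List {n_experts} distinct experts best suited to answer:\n"
        f"{instruction}\nReturn one expert role per line."
    ).splitlines()[:n_experts]

    # Step 2: answer the instruction once per expert persona.
    answers = [
        llm(f"You are {role}. Answer the following instruction:\n{instruction}")
        for role in experts
    ]

    # Step 3: compose a single combined response from the experts' answers.
    joined = "\n\n".join(f"[{r}]\n{a}" for r, a in zip(experts, answers))
    combined = llm(
        "Merge the following expert answers into one coherent response, "
        f"resolving any conflicts between them:\n{joined}"
    )

    # Step 4: select the best among the individual and combined responses.
    candidates = answers + [combined]
    numbered = "\n\n".join(f"({i}) {c}" for i, c in enumerate(candidates, 1))
    choice = llm(
        "Pick the single most truthful and useful response below. "
        f"Reply with its number only:\n{numbered}"
    )
    try:
        return candidates[int(choice.strip()) - 1]
    except (ValueError, IndexError):
        return combined  # fall back to the combined answer if parsing fails
```

With any completion function bound to `llm` (e.g., a thin wrapper around a chat API), the pipeline needs no manual prompt construction per task, which matches the adaptability claim in the abstract.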
Paper Type: short
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English