Abstract: We present Multi-expert Prompting, an enhanced extension of ExpertPrompting (Xu et al., 2023), which guides a large language model (LLM) to fulfill the input instruction as multiple experts, composes a combined response from the experts’ responses, and selects the best among the individual and combined responses. Our evaluations demonstrate that Multi-expert Prompting significantly surpasses ExpertPrompting and comparable baselines in enhancing the truthfulness, factuality, informativeness, and usefulness of LLM outputs, and in reducing their toxicity and hurtfulness, achieving state-of-the-art truthfulness. Moreover, it is highly adaptable to diverse scenarios, eliminating the need for manual prompt construction.
Paper Type: short
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
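The pipeline described in the abstract — role-play several experts, collect one answer per expert, compose a combined answer, then select the best among all candidates — can be sketched as below. This is a minimal illustration, not the paper's implementation: `query_llm`, the prompt templates, and the index-based selection format are all hypothetical assumptions.

```python
from typing import Callable, List


def multi_expert_prompting(
    instruction: str,
    query_llm: Callable[[str], str],  # hypothetical LLM interface: prompt -> text
    n_experts: int = 3,
) -> str:
    """Sketch of a multi-expert pipeline:
    1. ask the LLM to propose several expert identities,
    2. answer the instruction once per expert persona,
    3. merge the individual answers into a combined one,
    4. pick the best among individual and combined answers.
    """
    # Step 1: elicit expert identities, one per line (illustrative prompt wording).
    experts_raw = query_llm(
        f"List {n_experts} distinct experts suited to answer: {instruction}"
    )
    experts: List[str] = [
        e.strip() for e in experts_raw.splitlines() if e.strip()
    ][:n_experts]

    # Step 2: answer the instruction from each expert's perspective.
    answers = [
        query_llm(f"You are {expert}. Answer: {instruction}") for expert in experts
    ]

    # Step 3: compose a combined answer from the individual ones.
    combined = query_llm(
        "Merge these expert answers into one response:\n" + "\n".join(answers)
    )

    # Step 4: select the best among individual and combined answers,
    # here by asking for a 1-based index (an assumed selection format).
    candidates = answers + [combined]
    choice = query_llm(
        "Pick the best answer by replying with its 1-based index:\n"
        + "\n".join(f"{i + 1}. {a}" for i, a in enumerate(candidates))
    )
    try:
        return candidates[int(choice.strip()) - 1]
    except (ValueError, IndexError):
        return combined  # fall back to the combined answer
```

In practice `query_llm` would wrap an actual model call; the key design point is that generation, aggregation, and selection are all delegated to the LLM itself, so no prompt needs to be hand-crafted per task.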