Keywords: Multi-Agent Debate Strategies, Medical Q&A, Generative LLMs
TL;DR: We benchmark multi-agent debate strategies for medical Q&A, and develop a novel debate prompting strategy based on agent agreement that outperforms previously published strategies.
Abstract: Recent advancements in large language models (LLMs) underscore their potential for responding to medical inquiries. However, ensuring that generative agents provide accurate and reliable answers remains an ongoing challenge. In this context, multi-agent debate (MAD) has emerged as a prominent strategy for enhancing the truthfulness of LLMs. In this work, we provide a comprehensive benchmark of MAD strategies for medical Q&A, along with open-source implementations. These results shed light on how to use the various strategies effectively, including the trade-offs among cost, time, and accuracy. We build on these insights to propose a novel debate-prompting strategy based on agent agreement that outperforms previously published strategies on medical Q&A tasks.
Submission Number: 36