Abstract: Medical Q&A tasks aim to answer healthcare questions by analyzing medical data, playing a critical role in supporting patient care and informed decision-making. Recent research on LLMs for medical Q&A underscores their potential to tackle complex medical questions through advanced reasoning and the integration of external knowledge. However, these studies predominantly target complex questions and often overlook simpler ones, incurring unnecessary computational overhead and a higher risk of inaccuracies. To overcome these challenges, we propose a multi-agent framework that distinguishes complex from simple questions using confidence scores produced by distinct agents. The framework dynamically generates domain-specific agents whose confidence scores gauge question difficulty, allowing tailored processing strategies to be applied. Finally, the agents converge on an answer through a voting mechanism. Our approach demonstrates significant improvements across eight datasets, highlighting the effectiveness of the proposed framework. Our code is available at https://github.com/w1031343245/MedConMA.git.
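The confidence-based routing and voting described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical rendering, not the paper's actual implementation: the `Agent` interface, the `simple_threshold` parameter, and the confidence-weighted fallback for hard questions are all illustrative assumptions.

```python
# Minimal sketch of confidence-based difficulty routing plus a voting
# mechanism. All names here are illustrative assumptions, not the
# framework's real API.
from collections import Counter
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    domain: str
    # Returns (answer, confidence in [0, 1]) for a given question.
    answer_fn: Callable[[str], tuple[str, float]]

def route_and_vote(question: str, agents: list[Agent],
                   simple_threshold: float = 0.9) -> str:
    """Gauge question difficulty from agent confidences, then vote."""
    results = [agent.answer_fn(question) for agent in agents]
    # If every agent is highly confident, treat the question as simple
    # and return the plain majority answer, skipping heavier processing.
    if all(conf >= simple_threshold for _, conf in results):
        return Counter(ans for ans, _ in results).most_common(1)[0][0]
    # Otherwise treat it as complex: weight each vote by confidence
    # (a stand-in for whatever tailored strategy the framework applies).
    weighted: Counter = Counter()
    for ans, conf in results:
        weighted[ans] += conf
    return weighted.most_common(1)[0][0]
```

Treating unanimous high confidence as the "simple" signal mirrors the abstract's motivation of sparing easy questions the heavier pipeline; in the full framework, any additional reasoning for complex questions would run before the final vote.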