MultiMedEval: A Benchmark and a Toolkit for Evaluating Medical Vision-Language Models

Published: 06 Jun 2024 · Last Modified: 25 Jun 2024 · MIDL 2024 Poster · CC BY 4.0
Keywords: Vision-Language Models, Medical, Multimodal, Benchmark, Toolkit
Abstract: We introduce MultiMedEval, an open-source toolkit for the fair and reproducible evaluation of large medical vision-language models (VLMs). MultiMedEval comprehensively assesses model performance on a broad array of six multimodal tasks, conducted over 23 datasets and spanning 11 medical domains. The chosen tasks and performance metrics were selected for their widespread adoption in the community and their diversity, ensuring a thorough evaluation of a model's overall generalizability. We open-source a Python toolkit (https://anonymous.4open.science/r/MultiMedEval-C780) with a simple interface and setup process, enabling the evaluation of any VLM in just a few lines of code. Our goal is to simplify the intricate landscape of VLM evaluation and thus promote fair and uniform benchmarking of future VLMs.
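
To illustrate the few-lines-of-code evaluation loop the abstract describes, here is a minimal sketch. The names used (`MultiMedEval`, `SetupParams`, `EvalParams`, the `batcher` callable, and the task names) are assumptions modeled on a typical benchmark-toolkit interface, not a verbatim copy of the released API; consult the linked repository for the actual entry points.

```python
# Minimal sketch of evaluating a VLM with MultiMedEval.
# NOTE: the class and parameter names below are illustrative assumptions
# and may not match the released API exactly.
from multimedeval import MultiMedEval, SetupParams, EvalParams

def batcher(batch):
    # `batch` is assumed to be a list of (prompt, images) pairs.
    # Replace the body with a call into your own VLM; return one
    # generated answer string per input.
    return ["placeholder answer" for _ in batch]

engine = MultiMedEval()
# Download and prepare the benchmark datasets into a local folder.
engine.setup(SetupParams(storage_dir="data"))
# Run the chosen tasks against the wrapped model and collect metrics.
results = engine.eval(["VQA-RAD", "MedQA"], batcher, EvalParams(batch_size=32))
```

The key design point conveyed by the abstract is that the model is wrapped behind a single batching callable, so any VLM can be plugged in without modifying the toolkit itself.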
Submission Number: 44