TRIAGE: Ethical Benchmarking of AI Models Through Mass Casualty Simulations

Published: 12 Oct 2024 · Last Modified: 11 Nov 2024 · GenAI4Health Poster · CC BY 4.0
Keywords: machine ethics, TRIAGE benchmark, ethical decision-making, large language models, mass casualty incidents, medical triage, ethical dilemmas, prompting, adversarial prompting, jailbreak, AI in healthcare, decision support, triage scenarios, benchmarking
TL;DR: The TRIAGE Benchmark evaluates the ability of large language models (LLMs) to make ethical decisions in mass casualty scenarios, using real-world dilemmas designed by medical professionals.
Abstract: We present the TRIAGE Benchmark, a novel machine ethics (ME) benchmark that tests LLMs' ability to make ethical decisions during mass casualty incidents. It uses real-world ethical dilemmas with clear solutions designed by medical professionals, offering a more realistic alternative to annotation-based benchmarks. TRIAGE incorporates various prompting styles to evaluate model performance across different contexts. Most models consistently outperformed random guessing, suggesting that LLMs may support decision-making in triage scenarios. Neutral or factual scenario formulations led to the best performance, unlike other ME benchmarks where ethical reminders improved outcomes. Adversarial prompts reduced performance, but not to the level of random guessing. Open-source models made more morally serious errors, and greater general capability predicted better performance overall.
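As a rough illustration of the evaluation loop the abstract describes (scoring a model's choices on professionally defined dilemmas against a random-guessing baseline), a minimal sketch follows. The items, the `query_model` helper, and the scoring are hypothetical stand-ins, not the benchmark's actual data or harness.

```python
import random

# Hypothetical triage items: each scenario has fixed answer options and a
# single correct choice (illustrative only, not actual TRIAGE benchmark data).
ITEMS = [
    {"scenario": "Patient A is not breathing after airway repositioning; "
                 "Patient B is bleeding heavily but responsive. Treat first?",
     "options": ["A", "B"], "answer": "B"},
    {"scenario": "Patient C is walking wounded; Patient D has a respiratory "
                 "rate of 35/min. Which is the higher priority?",
     "options": ["C", "D"], "answer": "D"},
]

def query_model(scenario: str, options: list[str]) -> str:
    """Placeholder for an LLM call (e.g. an API request); here it just guesses."""
    return random.choice(options)

def accuracy(choose) -> float:
    """Fraction of items for which the given choice function picks the correct option."""
    correct = sum(choose(item) == item["answer"] for item in ITEMS)
    return correct / len(ITEMS)

# Compare model accuracy against the random-guessing baseline (0.5 for two options).
model_acc = accuracy(lambda item: query_model(item["scenario"], item["options"]))
random_acc = accuracy(lambda item: random.choice(item["options"]))
print(f"model accuracy: {model_acc:.2f}  random baseline: {random_acc:.2f}")
```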
Submission Number: 33