Causal Explanation-Guided Learning for Organ Allocation

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Causal Inference, Explanation Supervision, Organ Allocation
TL;DR: We develop a causal acceptance model for organ offers that learns from directional refusal reasons. Our method, ClexNet, leverages these explanations to guide learning and improve generalization beyond observed allocation policies.
Abstract: A central challenge in organ transplantation is the extremely low acceptance rate of donor organ offers—typically in the single digits—leading to high discard rates and suboptimal use of available grafts. Current acceptance models embedded in allocation systems are non-causal, trained on observational data, and fail to generalize to policy-relevant counterfactuals. This limits their reliability for both policy evaluation and simulator-based optimization. In this work, we reframe organ offer acceptance as a counterfactual prediction problem and propose a method to learn from routinely recorded—but often overlooked—refusal explanations. These refusal reasons act as direction-only counterfactual signals: for example, a refusal reason such as "old donor age" implies acceptance might have occurred had the donor been younger. We formalize this setting and introduce ClexNet, a novel causal model that learns policy-invariant representations via balanced training and an explanation-guided augmentation loss. On both synthetic and semi-synthetic data, ClexNet outperforms existing acceptance models in predictive performance, generalization, and calibration, offering a robust drop-in improvement for simulators and allocation policy evaluation. Beyond transplantation, our approach provides a general method for incorporating human direction-only explanations as a form of model supervision, improving performance in settings where only observational data is available.
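To make the abstract's idea of direction-only supervision concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of an explanation-guided augmentation loss: for an offer refused with a reason like "old donor age," we build a counterfactual copy with the flagged feature shifted in the indicated direction and penalize the model if its predicted acceptance score does not increase. All names, the perturbation size `delta`, and the loss weight are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AcceptanceModel(nn.Module):
    """Toy stand-in for an acceptance head (illustrative only)."""
    def __init__(self, d_in: int, d_hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # acceptance logits

def explanation_augmentation_loss(
    model: nn.Module,
    x: torch.Tensor,          # (batch, d) refused offers
    feat_idx: torch.Tensor,   # (batch,) feature named in the refusal reason
    direction: torch.Tensor,  # (batch,) +1 if a larger value would help, -1 if smaller
    delta: float = 1.0,       # assumed perturbation size (standardized units)
    margin: float = 0.0,
) -> torch.Tensor:
    """Hinge penalty if moving the flagged feature in the stated
    direction fails to raise the predicted acceptance logit."""
    x_cf = x.clone()
    rows = torch.arange(x.size(0))
    x_cf[rows, feat_idx] = x[rows, feat_idx] + direction * delta
    logit, logit_cf = model(x), model(x_cf)
    # The counterfactual offer should score at least `margin` higher.
    return F.relu(margin + logit - logit_cf).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = AcceptanceModel(d_in=8)
    x = torch.randn(16, 8)
    feat_idx = torch.zeros(16, dtype=torch.long)  # e.g. feature 0 = donor age
    direction = -torch.ones(16)                   # "old donor age" -> younger helps
    # Standard observational loss (all offers here were refused) plus the
    # explanation-guided term.
    bce = F.binary_cross_entropy_with_logits(model(x), torch.zeros(16))
    loss = bce + 0.5 * explanation_augmentation_loss(model, x, feat_idx, direction)
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```

This sketch only captures the direction-only signal; the paper's full method additionally learns policy-invariant representations via balanced training, which is not shown here.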
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 23805