An Evaluation of Cost Functions for Algorithmic Recourse

Published: 03 Feb 2026, Last Modified: 03 Feb 2026 · AISTATS 2026 Poster · CC BY 4.0
TL;DR: We conduct the first large-scale empirical evaluation of cost functions in algorithmic recourse, finding that scaling up the Bradley-Terry model with language models works best.
Abstract: Algorithmic recourse is a field concerned with offering actionable recommendations to individuals who have received adverse outcomes from automated systems. Most recourse algorithms assume access to a cost function, which quantifies the effort involved in following these suggestions. However, to date, these functions have not been seriously benchmarked from either a computational or a human perspective. In this paper, we propose four metrics to evaluate whether currently popular cost functions in recourse satisfy the minimal requirements for meaningful distance calculations. In addition, we propose extensions to current approaches that use large language models (LLMs) as surrogate human labellers, prompted with cost-based desiderata. Experiments reveal that methods based on the Bradley-Terry model perform best, but only when scaled up with our proposed extensions, making this combination the recommended choice in practice. We expect our insights to help practitioners train and design appropriate cost functions in the future.
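For intuition, below is a minimal, generic sketch of fitting a Bradley-Terry model from pairwise cost comparisons, the kind of labels that could come from human annotators or an LLM surrogate labeller. The function name, optimisation loop, and toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative Bradley-Terry sketch (assumed, not the paper's code).
# Given pairwise labels "action w is less costly than action l", we fit a
# latent cost c_k per candidate recourse action. Under Bradley-Terry,
#   P(w preferred over l) = sigmoid(c_l - c_w),
# so lower-cost actions win comparisons more often.

def fit_bradley_terry(pairs, n_items, lr=0.1, n_steps=1000):
    """pairs: iterable of (winner, loser) index pairs; returns latent costs."""
    c = np.zeros(n_items)
    for _ in range(n_steps):
        grad = np.zeros(n_items)
        for w, l in pairs:
            # Model probability of the observed preference.
            p = 1.0 / (1.0 + np.exp(-(c[l] - c[w])))
            grad[w] += 1.0 - p   # d NLL / d c_w: push winner's cost down
            grad[l] -= 1.0 - p   # d NLL / d c_l: push loser's cost up
        c -= lr * grad / len(pairs)
        c -= c.mean()            # costs are identified only up to a constant
    return c

# Toy usage: action 0 is consistently judged cheaper than 1, and 1 than 2.
pairs = [(0, 1), (0, 1), (1, 2), (0, 2)]
print(fit_bradley_terry(pairs, n_items=3))  # expect c[0] < c[1] < c[2]
```

In this framing, "scaling up" amounts to generating many more such pairwise labels cheaply, e.g. by prompting an LLM with cost-based desiderata instead of querying human annotators for every comparison.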
Submission Number: 1270