Machine Judges Reduce Sentencing Bias? A Computational Social Science Evaluation

Published: 13 Dec 2025, Last Modified: 16 Jan 2026 · AILaw26 · CC BY-NC-SA 4.0
Keywords: Machine Learning; Sentencing Disparity; Individualized Biases; Treating Like Cases Alike
Paper Type: Full papers
TL;DR: If all judges were replaced by machine learning models, the probability of receiving an unfair sentence would be 35% lower. Machine learning models inherently have the ability to reduce sentencing biases.
Abstract: Machine learning models have been applied to many criminal justice decisions, and prior research has shown that such models can reduce biases if they are blind. However, prior research focuses on classification tasks in criminal justice; disparity in regression tasks is much harder to evaluate. Prior work on sentencing bias evaluation considers only systematic biases and ignores individualized biases in cases. In this study, we focus on the sentencing task. We propose a new method to evaluate whether an individual case is biased by comparing it with all other cases, following the principle of **Treating Like Cases Alike**. We collect all 238,419 theft cases from CJO and extract the legal factors and sentencing results. We use 159,699 cases to build a machine learning model with XGBoost, and test the model's ability to reduce biases on the remaining 78,720 cases. By applying our method, we find that **if all judges were replaced by machine learning models, the probability of receiving an unfair sentence would be 35% lower**; when cooperating with judges, 55% of biased cases can be sentenced more fairly. Machine learning models can reduce individualized biases.
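The core idea of the evaluation method, comparing a case against all other cases with the same legal factors under Treating Like Cases Alike, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the median-based reference, and the relative tolerance `tol` are assumptions introduced here for clarity.

```python
import numpy as np

def like_case_bias(factors, sentences, i, tol=0.2):
    """Flag case i as potentially biased if its sentence deviates by more
    than a relative tolerance from the median sentence of all other cases
    sharing identical legal factors (Treating Like Cases Alike).

    factors:   (n_cases, n_factors) array of extracted legal factors
    sentences: (n_cases,) array of sentencing outcomes (e.g. months)
    """
    # Cases whose legal factors exactly match case i, excluding i itself.
    alike = np.all(factors == factors[i], axis=1)
    alike[i] = False
    if not alike.any():
        return False  # no comparable cases, cannot evaluate
    reference = np.median(sentences[alike])
    return abs(sentences[i] - reference) > tol * reference

# Toy example: five cases with identical legal factors; the last sentence
# is far from the others and is flagged as an outlier among like cases.
factors = np.array([[1, 0]] * 5)
sentences = np.array([12.0, 12.0, 12.0, 13.0, 30.0])
print(like_case_bias(factors, sentences, 4))  # the outlier case
print(like_case_bias(factors, sentences, 0))  # a typical case
```

In the paper's setting, the same comparison could be run on model-predicted sentences in place of judges' sentences, so that the share of flagged cases under each regime can be compared.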
Poster PDF: pdf
Submission Number: 2