Beyond Single-Attribute Fairness: A Cross-Jurisdictional Intersectional Audit of Criminal Justice Risk Assessment Systems

Published: 13 Dec 2025, Last Modified: 16 Jan 2026
Venue: AILaw26
License: CC BY-NC-SA 4.0
Keywords: Algorithmic Fairness, Intersectionality, Criminal Justice, Algorithmic Auditing, Risk Assessment
Paper Type: Full papers
TL;DR: We prove single-attribute audits underestimate criminal justice bias by 7.6×. Our study and open-source toolkit reduce violations by 60%, establishing intersectional auditing as a mandatory legal standard for trustworthy AI.
Abstract: Criminal justice risk assessment systems deployed across multiple jurisdictions exhibit systematic algorithmic bias, yet existing fairness audits analyze demographic attributes in isolation—failing to capture the compounding discrimination experienced by individuals at demographic intersections. We present the first comprehensive cross-jurisdictional intersectional fairness audit, analyzing 7,214 defendants from COMPAS (US/FL) with validation across the NIJ Recidivism Challenge (US/GA), the Wisconsin Circuit Court Database (US/WI), and CJEU Equality Law cases (EU)—covering 104 distinct demographic intersections across four legal systems. Our analysis reveals that single-attribute audits systematically underestimate bias by 7.6×: while race-only analysis shows a maximum disparity range of 7.0%, intersectional analysis uncovers 53.3% worst-case gaps (p < 0.001). Cross-jurisdictional validation demonstrates this is structural: all four systems exhibit severe violations, with 50-100% of intersectional groups violating the legal 4/5 rule. We provide practical debiasing achieving a 60% violation reduction at 0.36% accuracy cost, alongside an open-source toolkit outperforming existing solutions. This work establishes intersectional auditing as mandatory for trustworthy AI in high-stakes domains.
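The abstract's central check—the legal "four-fifths rule" evaluated over demographic intersections rather than single attributes—can be sketched as follows. This is a minimal illustration with toy data and invented column values, not the paper's dataset or toolkit; a group violates the rule when its selection rate falls below 80% of the most-selected group's rate.

```python
# Hedged sketch of an intersectional four-fifths-rule check.
# Records and group labels below are illustrative, not from the paper.

# Toy records: (race, sex, predicted_high_risk)
records = [
    ("A", "F", 1), ("A", "F", 0), ("A", "M", 1), ("A", "M", 1),
    ("B", "F", 0), ("B", "F", 0), ("B", "M", 1), ("B", "M", 0),
]

# Selection rate (share flagged high-risk) per race x sex intersection.
rates = {}
for race, sex in {(r, s) for r, s, _ in records}:
    outcomes = [y for r, s, y in records if (r, s) == (race, sex)]
    rates[(race, sex)] = sum(outcomes) / len(outcomes)

# Four-fifths rule: each group's rate must be >= 80% of the highest rate.
best = max(rates.values())
violations = {g: rate for g, rate in rates.items() if rate < 0.8 * best}
print(sorted(violations))  # intersections violating the 4/5 rule
```

Note how auditing the marginal attributes alone (race only, or sex only) would average over these intersections and can mask exactly the gaps the paper reports.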
Submission Number: 39