MARTA: Machine Learning Auditing for Robust and Transparent (Public) Administration

Published: 13 Dec 2025, Last Modified: 16 Jan 2026 · AILaw26 · CC BY-NC-SA 4.0
Keywords: Public Sector AI, Trustworthiness, Auditing, Accountability, Compliance, Governance
Paper Type: Full papers
TL;DR: MARTA: A concise auditing framework translating legal requirements into technical evaluation metrics, applied to a real-world public-sector AI system.
Abstract: Artificial intelligence (AI) systems in public administration raise complex challenges for fairness, transparency, and legal accountability. This paper presents Machine Learning Auditing for Robust and Transparent Administration (MARTA), a practical methodology applied to a Smart e-Desk use case being developed for the Portuguese Tax Authority. MARTA proposes a multidisciplinary auditing framework combining technical, legal, and human-centered perspectives across five dimensions: bias, robustness, privacy, transparency, and human oversight. Using methods such as counterfactual bias inference, adversarial robustness evaluation, and privacy-risk analysis, the framework aligns with the HUDERIA methodology of the Council of Europe and extends it through deeper technical evaluation and regulatory mapping to the EU AI Act and GDPR. Results indicate a solid framework: the audited system exhibited minimal gender imbalance and was robust to the most common data variations. While the system already integrates privacy-preserving and explainability methods, tests suggest these are only partially sufficient, so further refinement is advised. The study contributes a practical model for lawful and trustworthy AI auditing in the public sector, demonstrating how rights-based principles can be translated into measurable audit procedures and actionable governance measures.
Poster PDF: pdf
Submission Number: 35