Necessity of Processing Sensitive Data for Bias Detection and Monitoring: A Techno-Legal Exploration
Keywords: AI Act, EU Data Protection Law, Necessity Principle, Bias Detection, Bias Monitoring, Fair ML
TL;DR: This paper explores the intersection of upcoming AI regulation and Fair Machine Learning research, specifically examining the necessity principle in the context of processing sensitive personal data for bias detection and monitoring in AI systems.
Abstract: This paper explores the intersection of the upcoming AI Regulation and fair ML research, specifically examining the legal principle of "necessity" in the context of processing sensitive personal data for bias detection and monitoring in AI systems. Drawing upon Article 10(5) of the AI Act, currently under negotiation, and the General Data Protection Regulation, we investigate the challenges posed by the nuanced concept of "necessity" in enabling AI providers to process sensitive personal data for bias detection and bias monitoring. The lack of guidance on this binding textual requirement creates significant legal uncertainty for all parties involved and risks inconsistent legal application. To address this issue from a techno-legal perspective, we delve into the core of the necessity principle and map it to current approaches in fair machine learning. Our objective is to bridge operational gaps between the forthcoming AI Act and the evolving field of fair ML, and to support an integrated approach to the non-discrimination and data protection desiderata in the conception of fair ML, thereby facilitating regulatory compliance.
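As a concrete illustration of why the necessity question is operative (our sketch, not the paper's method): most group fairness metrics used for bias detection cannot be computed without access to the sensitive attribute itself. The minimal Python sketch below, in which the function name and toy data are hypothetical, computes the demographic parity difference, i.e. the gap in positive-prediction rates across groups defined by a sensitive attribute.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between groups.

    Note that the sensitive attribute must be available in plain form
    to compute this metric -- the kind of processing whose "necessity"
    the paper analyses under Article 10(5) AI Act and the GDPR.
    """
    groups = np.unique(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in groups]
    return max(rates) - min(rates)

# Toy example: binary model predictions for two groups "a" and "b".
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Group "a" receives positive predictions at rate 0.75, group "b" at
# rate 0.25, so the demographic parity difference is 0.5.
print(demographic_parity_difference(y_pred, group))
```

A provider monitoring a deployed model for such disparities would need to collect or retain the `sensitive` column alongside predictions, which is precisely where the necessity requirement constrains how much sensitive data may be processed and for how long.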
Submission Number: 5