Keywords: Trustworthy Artificial Intelligence, Data Annotation, Algorithmic Fairness, Data Traceability, Dual Governance
Paper Type: Full papers
Abstract: Data annotation is the foundational "meta-labour" of training artificial intelligence models, yet fairness deficits and implanted bias in annotation processes have become significant sources of algorithmic discrimination, directly impeding the realisation of trustworthy artificial intelligence. This study systematically analyses the mechanisms by which algorithmic bias is generated within data annotation, revealing governance dilemmas arising from two pathways: cognitive embedding by annotating agents and structural exclusion of data subjects. By comparing legislative approaches to the fair governance of data annotation across the EU, the US, and China, it identifies theoretical blind spots in current regulations concerning the formalisation of value embedding. Building on this analysis, a dual governance framework of "rigid legal constraints coupled with flexible ethical guidance" is proposed. The framework outlines pathways for bias mitigation along the dual dimensions of rule embedding and ethical review, offering a systematic response to the "inherent flaws" underlying algorithmic discrimination and a route to fairness in data annotation. It further advances the paradigm shift in AI governance from "algorithm explanation" towards "data traceability".
Submission Number: 20