The Dual Risks and Prevention Paths of Digital Equality Protection from the Perspective of Human-AI Interaction

Published: 13 Dec 2025, Last Modified: 16 Jan 2026 · AILaw26 · CC BY-NC-SA 4.0
Keywords: AI alignment; human-machine interaction; equality value; risk prevention
Paper Type: Full papers
TL;DR: This paper illuminates new changes in the protection of equal rights in the digital age.
Abstract: Value alignment between humans and AI is a crucial pathway for preventing ethical problems in AI, and misalignment of equality values leads to significant risks. As AI agents enter the value alignment of equality, which formerly took place only among humans, existing inequality problems take on three new characteristics: the individualization of opinion leaders, the embeddedness of group discrimination, and the dynamism of weak positions. Simultaneously, tensions in human-AI interaction create new inequality risks in three forms: “AI used by humans”, “humans used by AI”, and “humans unable to use AI”. To guard against these risks, first, both the review of subjects at the center of technology and the protection of subjects at its periphery should be strengthened, grounded in a sense of community. Second, from the perspective of “lex digitalis”, legal interpretation can give the right to equality a digital connotation, and a normative review mechanism can improve anti-discrimination review. Third, it is feasible to consolidate the principle of “lenient entry, rigorous exit” for technical review, addressing the tension between technological uncertainty and value conflict. Through these three dimensions, an equality-value review mechanism can be constructed. This mechanism aims to improve the safety and trustworthiness of AI and to seize opportunities for forging an equality-value consensus in risk society.
Poster PDF: pdf
Submission Number: 8