Keywords: human-centered AI, responsible AI, agency, autonomy, human empowerment, human-aligned AI
TL;DR: To properly center humans, we require a focus on the outcome of human empowerment (or the best possible future for humans) in addition to our current focus on model, machine, and algorithm safety and acceptance (or the least harmful future for humans).
Abstract: This paper is a preliminary attempt to offer insights into how we currently understand and navigate the challenges AI poses to human agency and autonomy, and to argue that our aim in governance requires adjusting because it still does not properly center humans. To adjust our aim, we require a focus on the outcome of human empowerment, or the best possible future for humans, in addition to our current focus on model, machine, and algorithm safety and acceptance, or the least harmful future for humans. This AI Governance Workshop paper argues that authentic human-centeredness in the research, design, development, and deployment (RDDD) of AI can complement human-compatibility and/or human-values-alignment constructs and can add the friction necessary to 1) foster human empowerment, 2) reverse heteronomy, and 3) avoid a future in which the achievement of a fully embodied AI system simultaneously influences the disembodiment of humans. The paper proposes a structure and diagnostic framework for defining human empowerment. To illustrate this framework, a generalized machine learning lifecycle is presented alongside notions of reliance on, choices surrounding, and access to AI.
Submission Number: 6