Track: Track 2: Socio-Economical and Future Visions
Keywords: digital gender divide, digital inequality, generative AI diffusion, algorithmic patriarchy, socio-technical systems, development economics, feminist political economy, AI adoption measurement, governance, digital inclusion
TL;DR: A socio-economic risk model showing how gender inequality and generative AI adoption can reinforce each other through feedback loops, plus a measurement agenda to detect and prevent widening gaps.
Abstract: As generative AI systems become pervasive tools for knowledge work, a central socioeconomic question is whether their diffusion reduces inequality through broad-based productivity gains or reproduces existing hierarchies through unequal participation and control. We introduce \emph{digital gender circularity} as a measurable risk model for the generative AI era: digital gender inequality and offline gender inequality can co-evolve in feedback loops, so unequal access and skills shape who benefits from AI adoption, while AI-mediated labor markets and information systems can in turn reshape offline gender outcomes.
We argue that the most policy-relevant inequality mechanisms may occur before downstream model audits, at the diffusion stage: who adopts, what uses are legitimized, and which institutions convert AI usage into rewards. This perspective reframes \emph{algorithmic patriarchy} as an institutional-diffusion phenomenon, not only a model-behavior problem. To make these dynamics testable, we propose a compact empirical agenda: (i) construct country-level indicators of generative AI diffusion (awareness, access constraints, intensity, and use-case mix), (ii) link diffusion parameters to established gender-development indices and pre-existing digital parity, and (iii) test whether earlier digital gender parity predicts faster convergence in AI usage and outcomes, versus divergence driven by skill, safety, and legitimacy barriers. We close with actionable measurement and governance implications for capability building, affordability, safety, and auditability so that productivity gains do not amplify entrenched inequality.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Presenter: ~Linda_Hong_Cheng2
Format: Yes, the presenting author will definitely attend in person because they are attending ICLR for other complementary reasons.
Funding: Yes, the presenting author of this submission falls under ICLR’s funding aims, and funding would significantly impact their ability to attend the workshop in person.
Submission Number: 35