Keywords: responsible ai, fairness, generative AI, regulation, governance, testbeds, explainability
TL;DR: A survey of 80+ studies (2022–2024) mapping the problem and solution spaces of Responsible Generative AI across technical and regulatory perspectives.
Abstract: Generative AI (GenAI) has rapidly expanded into domains such as healthcare, finance, education, and media, raising acute concerns around fairness, transparency, accountability, and governance. While prior Responsible AI (RAI) surveys have addressed bias mitigation, privacy, and ethical design, they largely focus on traditional AI and overlook the distinctive risks of GenAI, including hallucinations, stochastic outputs, intellectual property disputes, and large-scale synthetic content generation. This survey addresses that gap by systematically reviewing more than 80 studies published between 2022 and 2024 to examine Responsible Generative AI through both technical and regulatory perspectives. We identify five core problem areas: data-related risks, model-related risks, regulatory challenges, the limited scope of existing benchmarks, and poor explainability. In response, we highlight emerging solutions across five domains: establishing clear principles, adopting governance frameworks, defining measurable metrics, validating through AI-ready testbeds, and enabling adaptive oversight via regulatory sandboxes. By mapping these problem and solution spaces, this study contributes an integrated framework for Responsible Generative AI, providing actionable insights for researchers, practitioners, and policymakers seeking to align innovation with ethical, societal, and legal expectations.
Track: Track 2: ML by Muslim Authors
Submission Number: 28