Operationalizing Responsible AI Policies with LLMs: an End-to-End Monitoring Prototype

Published: 13 Apr 2026, Last Modified: 04 May 2026 · CHI Late-Breaking Work 2026 · CC BY 4.0
Abstract: As AI governance requirements continue to emerge, policy experts and engineers are increasingly responsible for demonstrating that AI systems comply with ethical and regulatory expectations. In practice, this work involves interpreting high-level, frequently ambiguous policy language and translating it into concrete, testable compliance practices. These processes are time-consuming and difficult to scale as regulations and systems evolve. We present the Responsible AI Monitoring Platform (RAMP), a human-centered system that supports experts in making AI governance work more systematic and transparent. RAMP extracts policy statements from governance documents, decomposes them into atomic obligations, and proposes system-specific rules linked to available evaluations, while surfacing gaps where requirements remain unverifiable or underspecified. Human experts remain the final decision-makers, ensuring that extracted policies are reviewed before downstream use. In a pilot focused on conversational systems, RAMP provides interpretable compliance evidence and decision-oriented summaries through an interactive dashboard, supporting traceability in responsible AI workflows.