Usage Governance Advisor: From Intent to AI Governance

Published: 02 Jan 2025, Last Modified: 03 Mar 2025 | AAAI 2025 Workshop AIGOV (Oral) | CC BY 4.0
Keywords: AI governance, Risk Assessment, Knowledge Graph
TL;DR: The Usage Governance Advisor evaluates AI safety by using a Knowledge Graph to organize risks, prioritize them by use case, recommend benchmarks and assessments, and suggest mitigation strategies, ensuring informed, responsible AI deployment.
Abstract: Evaluating the safety of AI systems is a pressing concern for organizations deploying them. Beyond the societal damage caused by unfair systems, deployers, e.g. companies, are concerned about the legal repercussions and reputational damage incurred by using unsafe models. Safety covers both what a model does, e.g. whether it can be used to reveal personal information from its training set, and what a model is, e.g. whether it was trained only on licensed data sets. Responsible use is encouraged through mechanisms that advise the user and help them take mitigating actions where safety risks are detected. Determining the safety of an AI system requires gathering information from a wide set of heterogeneous sources, including safety benchmarks and technical documentation for the models used in that system. We present Usage Governance Advisor, which creates semi-structured governance information, identifies and prioritizes risks according to the intended use case, recommends appropriate benchmarks and risk assessments, and, importantly, suggests mitigation strategies and actions. Our solution leverages a Knowledge Graph (KG) based approach that organizes risk-related information about AI systems using an ontology. We describe how we populate this KG and leverage it for tooling that aids decision making when assessing the risks associated with an AI solution.
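
To make the KG-based approach concrete, below is a minimal sketch in Python using rdflib of how risk-related information about an AI system might be organized as a Knowledge Graph and queried for decision support. This is not the authors' implementation: the namespace, ontology terms (ugo:Model, ugo:Risk, ugo:hasRisk, ugo:assessedBy, ugo:mitigatedBy), and entity names are hypothetical placeholders standing in for the paper's actual ontology.

```python
# Hypothetical sketch of a governance Knowledge Graph; the ontology terms
# below are illustrative placeholders, not the paper's actual schema.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

UGO = Namespace("http://example.org/usage-governance#")

g = Graph()
g.bind("ugo", UGO)

# Describe a model, a risk it carries, a benchmark that assesses that risk,
# and a mitigation strategy, as triples in the graph.
model = UGO["some-llm"]
risk = UGO["pii-leakage"]
benchmark = UGO["privacy-benchmark"]
mitigation = UGO["output-filtering"]

g.add((model, RDF.type, UGO.Model))
g.add((risk, RDF.type, UGO.Risk))
g.add((risk, RDFS.label, Literal("Reveals personal information from training data")))
g.add((model, UGO.hasRisk, risk))
g.add((risk, UGO.assessedBy, benchmark))
g.add((risk, UGO.mitigatedBy, mitigation))

# A SPARQL query can then surface risks and their mitigations for a given
# model, supporting the kind of tooling the abstract describes.
query = """
PREFIX ugo: <http://example.org/usage-governance#>
SELECT ?risk ?mitigation WHERE {
    ugo:some-llm ugo:hasRisk ?risk .
    ?risk ugo:mitigatedBy ?mitigation .
}
"""
for row in g.query(query):
    print(row.risk, "->", row.mitigation)
```

Representing risks, benchmarks, and mitigations as graph entities in this way lets use-case-specific prioritization and recommendation be expressed as graph queries over heterogeneous sources.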
Submission Number: 5