CLARA: A Hybrid LLM-Rule System for EU AI Act Risk Classification

Published: 13 Dec 2025 · Last Modified: 16 Jan 2026 · Venue: AILaw26 · License: CC BY-NC-SA 4.0
Keywords: Retrieval-Augmented Generation, Information Retrieval, Classification, Compliance, AI Act
Paper Type: Demo paper
TL;DR: This work presents CLARA, a hybrid system that combines LLMs with symbolic rule reasoning to support organizations self-assessing the regulatory risk tier of their AI systems according to the EU AI Act.
Abstract: Interpreting and operationalizing the European Union Artificial Intelligence Act poses a dual challenge: it demands both technical understanding of AI systems and legal interpretation of a complex regulatory framework. Manual risk classification is costly, inconsistent, and difficult to scale. This work presents CLARA, a hybrid system that combines large language models (LLMs) with symbolic rule reasoning to support organizations in the first step of self-assessing the regulatory risk tier of their AI systems. CLARA processes free-form textual descriptions of AI systems, retrieves relevant provisions and guidelines, and evaluates them through two complementary approaches: (1) an LLM-only semantic reasoning pipeline, and (2) a neurosymbolic pipeline in which LLMs identify condition matches and a deterministic rule engine produces the final decision. We demonstrate CLARA through an interactive web interface that visualizes evidence retrieval, rule evaluation, and explainable classifications. Preliminary experiments using a set of example system descriptions constructed by a legal expert suggest that hybrid reasoning improves interpretability and robustness compared to purely generative approaches. The demo highlights how AI and Law can be effectively bridged through transparent, legally grounded reasoning.
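The neurosymbolic pipeline described above (an LLM flags which regulatory conditions a system description matches, then a deterministic rule engine maps those matches to a risk tier) might be sketched as follows. This is a minimal illustration, not CLARA's implementation: the condition names, keyword heuristics (standing in for the LLM matcher), and tier rules are all hypothetical assumptions.

```python
# Sketch of the neurosymbolic step: an LLM (mocked here with keyword
# heuristics) identifies condition matches in a free-form system description,
# and a deterministic rule engine produces the final risk-tier decision.
# All condition names, keywords, and rules below are illustrative only.

# Rules checked in priority order: first rule whose conditions are all
# matched determines the tier (hypothetical, not the AI Act's actual logic).
RULES = [
    ("prohibited",   {"social_scoring"}),
    ("high_risk",    {"biometric_identification"}),
    ("high_risk",    {"employment_screening"}),
    ("limited_risk", {"interacts_with_humans"}),
]

def mock_llm_condition_matches(description: str) -> set:
    """Stand-in for the LLM matcher: crude keyword heuristics."""
    keywords = {
        "social_scoring": "social scoring",
        "biometric_identification": "biometric",
        "employment_screening": "cv",
        "interacts_with_humans": "chatbot",
    }
    text = description.lower()
    return {cond for cond, kw in keywords.items() if kw in text}

def classify(description: str) -> str:
    """Deterministic rule engine: apply the first fully matched rule."""
    matches = mock_llm_condition_matches(description)
    for tier, required in RULES:
        if required <= matches:  # all required conditions were matched
            return tier
    return "minimal_risk"

print(classify("A chatbot that screens CVs for hiring decisions"))
# → high_risk (employment_screening outranks interacts_with_humans)
```

Because the final decision is produced by the rule engine rather than by free-form generation, each classification can be explained by listing exactly which conditions matched and which rule fired, which is the interpretability property the abstract emphasizes.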
Submission Number: 36