Keywords: AI Safety, Activation-Based Monitoring, Rule-Based Detection, Large Language Models, Misuse Detection
TL;DR: A Rule-Based Approach to Activation-Based AI Safety
Abstract: Large language models (LLMs) are increasingly paired with activation-based monitoring to detect and prevent harmful behaviors that may not be apparent at the surface-text level. However, existing activation-safety approaches, trained on broad misuse datasets, suffer from poor precision, limited flexibility, and a lack of interpretability. This paper introduces a new paradigm: rule-based activation safety, inspired by rule-sharing practices in cybersecurity. We propose modeling activations as cognitive elements (CEs): fine-grained, interpretable factors such as _"making a threat"_ and _"payment processing"_ that can be composed to capture nuanced, domain-specific behaviors with higher precision. Building on this representation, we present a practical framework that defines predicate rules over CEs and detects violations in real time. This enables practitioners to configure and update safeguards without retraining models or detectors, while supporting transparency and auditability. Our results show that compositional rule-based activation safety improves precision, supports domain customization, and lays the groundwork for scalable, interpretable, and auditable AI governance. We open source GAVEL and introduce GAVEL Studio, an interactive rule-authoring and management tool. Code and datasets are available at [github.com/Offensive-AI-Lab/gavel](https://github.com/Offensive-AI-Lab/gavel)
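The abstract's idea of composing predicate rules over cognitive elements can be sketched as follows. This is a minimal illustration under assumed conventions, not GAVEL's actual API: we assume a probe maps activations to per-CE scores in [0, 1], and a rule is a boolean predicate composed over those scores; all names and thresholds here are hypothetical.

```python
from typing import Callable, Dict

# A rule is a predicate over a mapping from CE name to probe score in [0, 1].
Rule = Callable[[Dict[str, float]], bool]

def make_conjunction_rule(thresholds: Dict[str, float]) -> Rule:
    """Build a rule that fires when every listed CE exceeds its threshold."""
    def rule(ce_scores: Dict[str, float]) -> bool:
        return all(ce_scores.get(ce, 0.0) > t for ce, t in thresholds.items())
    return rule

# Hypothetical rule: flag content that combines threatening language
# with payment processing (e.g. extortion in a payments domain).
fraud_threat_rule = make_conjunction_rule(
    {"making_a_threat": 0.7, "payment_processing": 0.5}
)

scores = {"making_a_threat": 0.9, "payment_processing": 0.6}  # from a CE probe
print(fraud_threat_rule(scores))                    # True -> rule violation
print(fraud_threat_rule({"making_a_threat": 0.9}))  # False -> no violation
```

Because each rule is just a predicate over named, interpretable scores, practitioners could add or retune rules per domain without retraining the underlying model or detectors, which is the flexibility the abstract emphasizes.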
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 5991