Automatically Finding and Validating Unexpected Side-Effects of Interventions on Language Models

ACL ARR 2026 January Submission7259 Authors

06 Jan 2026 (modified: 20 Mar 2026) · License: CC BY 4.0
Keywords: NLP, natural language processing, language models, interpretability, contrastive analysis, side effects, safety, alignment, scalable supervision, reasoning distillation, finetuning, knowledge editing, unlearning
Abstract: We present an automated, contrastive evaluation pipeline for auditing the behavioral impact of interventions on large language models. Given a base model $M_1$ and an intervention model $M_2$, our method compares their free-form, multi-token generations across aligned prompt contexts and produces human-readable, statistically validated natural-language hypotheses describing how the models differ, along with recurring themes that summarize patterns across the validated hypotheses. We first evaluate the approach in a synthetic setting by injecting known behavioral changes and showing that the pipeline reliably recovers them. We then apply it to three real-world interventions: reasoning distillation, knowledge editing, and unlearning. In all three cases, the method surfaces both intended and unexpected behavioral shifts, distinguishes large from subtle interventions, and does not hallucinate differences when effects are absent or misaligned with the prompt bank. Overall, the pipeline provides a statistically grounded and interpretable tool for post-hoc auditing of intervention-induced changes in model behavior.
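To make the abstract's pipeline concrete, here is a minimal sketch (not from the paper) of a contrastive audit loop. The names `generate_m1`, `generate_m2`, `hypothesize`, and `judge` are hypothetical stand-ins for the paper's unspecified components (sampling from $M_1$ and $M_2$, an LLM-based hypothesis proposer, and an LLM-based scorer); the statistical validation shown here is a generic two-sided permutation test on mean judge scores, chosen only for illustration.

```python
# Hypothetical sketch of a contrastive auditing pipeline: generate from both
# models over an aligned prompt bank, propose natural-language difference
# hypotheses, and keep only those that survive a permutation test.
import random
from typing import Callable, List

def permutation_pvalue(scores_m1: List[float], scores_m2: List[float],
                       n_perm: int = 10_000, seed: int = 0) -> float:
    """Two-sided permutation test on the difference of mean judge scores."""
    rng = random.Random(seed)
    observed = abs(sum(scores_m2) / len(scores_m2) - sum(scores_m1) / len(scores_m1))
    pooled = scores_m1 + scores_m2
    k = len(scores_m1)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[k:]) / len(pooled[k:]) - sum(pooled[:k]) / k)
        hits += diff >= observed
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

def audit(prompts: List[str],
          generate_m1: Callable[[str], str],       # base model M_1 (hypothetical)
          generate_m2: Callable[[str], str],       # intervened model M_2 (hypothetical)
          hypothesize: Callable[[List[str], List[str]], List[str]],  # NL hypothesis proposer
          judge: Callable[[str, str], float],      # scores one generation against one hypothesis
          alpha: float = 0.05) -> List[str]:
    """Return the hypotheses whose effect is statistically validated."""
    gens_m1 = [generate_m1(p) for p in prompts]
    gens_m2 = [generate_m2(p) for p in prompts]
    validated = []
    for h in hypothesize(gens_m1, gens_m2):
        s1 = [judge(h, g) for g in gens_m1]
        s2 = [judge(h, g) for g in gens_m2]
        if permutation_pvalue(s1, s2) < alpha:
            validated.append(h)
    return validated
```

In this sketch, a hypothesis such as "M2 hedges more on medical questions" would be validated only if the judge's scores differ significantly between the two models' generations; recurring themes could then be summarized over the returned list.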
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: AI / LLM Agents, Generation, Interpretability and Analysis of Models for NLP, Language Modeling
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 7259