Keywords: Concept-based models, explainable AI, interpretability
TL;DR: We introduce the Sidechannel Independence Score (SIS) and SIS regularization, providing the first principled way to measure and control the accuracy–interpretability trade-off in concept-based sidechannel models.
Abstract: Concept Bottleneck Models (CBNMs) are deep learning models that provide interpretability by enforcing a bottleneck layer where predictions are based exclusively on human-understandable concepts. However, this constraint also restricts information flow and often results in reduced predictive accuracy. Concept Sidechannel Models (CSMs) address this limitation by introducing a sidechannel that bypasses the bottleneck and carries additional task-relevant information. While this improves accuracy, it simultaneously compromises interpretability, as predictions may rely on uninterpretable representations transmitted through the sidechannel. Currently, no principled technique exists to control this fundamental trade-off. In this paper, we close this gap. First, we present a unified probabilistic concept sidechannel meta-model that subsumes existing CSMs as special cases. Building on this framework, we introduce the Sidechannel Independence Score (SIS), a metric that quantifies a CSM’s reliance on its sidechannel by contrasting predictions made with and without sidechannel information. We further analyze how the expressivity of the predictor and its reliance on the sidechannel jointly shape interpretability, revealing inherent trade-offs across different CSM architectures. Finally, we propose SIS regularization, which explicitly penalizes sidechannel reliance to improve interpretability. Empirical results show that state-of-the-art CSMs, when trained solely for accuracy, exhibit low representation interpretability, and that SIS regularization substantially improves their interpretability, intervenability, and the quality of learned interpretable task predictors. Our work provides both theoretical and practical tools for developing CSMs that balance accuracy and interpretability in a principled manner.
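The abstract describes SIS operationally (contrast predictions made with and without sidechannel information) but does not give the formal definition, so the following PyTorch sketch is only illustrative. It assumes a task predictor that consumes concept activations plus a sidechannel embedding, ablates the sidechannel with a zero baseline, and uses a KL divergence as the contrast; names such as `ToyTaskPredictor`, `sidechannel_reliance`, and `lam` are hypothetical and not the paper's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical CSM task head: concepts and a sidechannel embedding are
# concatenated and mapped to class logits. Purely illustrative.
class ToyTaskPredictor(nn.Module):
    def __init__(self, n_concepts: int, side_dim: int, n_classes: int):
        super().__init__()
        self.head = nn.Linear(n_concepts + side_dim, n_classes)

    def forward(self, concepts: torch.Tensor, side: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([concepts, side], dim=-1))

def sidechannel_reliance(predictor, concepts, side):
    """Contrast predictions with and without the sidechannel.

    The sidechannel is ablated by replacing it with a zero baseline (an
    assumption; the paper's ablation may differ). The divergence between
    the two predictive distributions is 0 when the predictor ignores the
    sidechannel entirely, and grows with reliance on it.
    """
    logits_full = predictor(concepts, side)
    logits_ablated = predictor(concepts, torch.zeros_like(side))
    log_p_full = F.log_softmax(logits_full, dim=-1)
    p_ablated = F.softmax(logits_ablated, dim=-1)
    return F.kl_div(log_p_full, p_ablated, reduction="batchmean")

def sis_regularized_loss(logits, targets, predictor, concepts, side, lam=0.1):
    """Ordinary task loss plus an explicit penalty on sidechannel reliance."""
    return F.cross_entropy(logits, targets) + lam * sidechannel_reliance(
        predictor, concepts, side
    )
```

An independence score could then be obtained by normalizing or complementing this reliance term (e.g., `exp(-reliance)` to map it into (0, 1]); the paper's exact normalization is not specified in the abstract.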
Primary Area: interpretability and explainable AI
Submission Number: 20645