Detection of Local and Global Separability in Blurry Models as a Method for Explainable AI

Published: 15 Mar 2026, Last Modified: 15 Mar 2026, Oral, License: CC BY 4.0
Keywords: blurry model, pairwise independent submodels, globally separable model, locally separable model, globally entangled model, locally entangled model
Abstract: The rapid advancement of deep learning and the deployment of intelligent systems in critical domains (medicine, finance, law) have brought the issue of decision interpretability to the forefront. Despite their high performance, "black box" architectures create a trust barrier for users due to the opacity of their internal structure. Existing post-hoc analysis methods (such as Grad-CAM, SHAP, and LIME) provide only local interpretations and fail to offer a global explanation of a system's logic. Structural decomposition of complex systems into independent subsystems is considered a promising approach to addressing XAI challenges, as it enhances model controllability, verifiability, and safety. In this paper, this approach is implemented within the framework of blurry model theory, a methodology for logical formalization under conditions of incomplete and imprecise knowledge. The authors extend the classical concept of a submodel to the class of blurry structures and introduce a mutual independence criterion: submodels are considered independent if the events they describe are stochastically independent. On this basis, the property of separability, i.e., the capacity of a blurry model to be decomposed into independent components, is formalized, and criteria for local and global separability are investigated. A key result of the work is the proof of a theorem on the uniqueness of the "normal" (minimal) decomposition of a model.
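To make the stated independence criterion concrete, here is a minimal formal sketch; the notation ($M$ for the blurry model, $M_i$ for its submodels, $\mathcal{E}(M_i)$ for the set of events a submodel describes, $P$ for the probability measure, $\circ$ for composition) is illustrative and not taken from the paper:

```latex
% Illustrative notation, not taken from the paper.
% Mutual independence of two submodels M_i, M_j of a blurry model M:
\[
  M_i \perp M_j \iff
  P(A \cap B) = P(A)\,P(B)
  \quad \forall A \in \mathcal{E}(M_i),\ \forall B \in \mathcal{E}(M_j).
\]
% Separability: M decomposes into pairwise independent submodels.
\[
  M \text{ is separable} \iff
  M = M_1 \circ \dots \circ M_k,\ k \ge 2,
  \text{ with } M_i \perp M_j \text{ for all } i \ne j.
\]
% A decomposition is "normal" (minimal) when no factor M_i is itself
% separable; the paper's main theorem asserts its uniqueness.
```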
Submission Number: 26