Local vs. Global Interpretability: A Computational Perspective

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: interpretability, explainable AI
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: A framework for evaluating the interpretability of ML models at both local and global levels, when considering computational complexity.
Abstract: The local and global interpretability of various ML models has been studied extensively in recent years. Yet despite significant progress in the field, many of the known results remain informal or lack sufficient mathematical rigor. In this work, we propose a framework based on computational complexity theory for systematically evaluating the local and global interpretability of different ML models. In essence, our framework examines various forms of explanations that can be computed either locally or globally, and assesses the computational complexity of generating them. We begin by rigorously studying global explanations, and establish: (1) a duality relationship between local and global forms of explanations; and (2) the inherent uniqueness associated with certain global forms of explanations. We then evaluate the computational complexity of these forms of explanations, with a particular emphasis on three model types usually positioned at the extremes of the interpretability spectrum: (1) linear models; (2) decision trees; and (3) neural networks. Our findings reveal that, under standard complexity assumptions such as P ≠ NP, computing global explanations of linear models is computationally harder than computing their local counterparts. Surprisingly, this phenomenon does not carry over to decision trees and neural networks: in certain scenarios, computing a global explanation is actually more tractable than computing a local one. We view these results as compelling evidence of the importance of analyzing ML explainability from a computational complexity perspective, as a means of gaining a deeper understanding of the inherent interpretability of diverse ML models.
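To make the kind of local query the abstract alludes to concrete, here is a minimal, hypothetical sketch (not taken from the submission): for a linear classifier over box-bounded features, one can check in linear time whether a fixed subset of features constitutes a sufficient reason for a prediction, by pushing every free feature to the bound that works against the current prediction. All names (is_sufficient_reason, fixed, lo, hi) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def is_sufficient_reason(w, b, x, fixed, lo, hi):
    """Return True iff fixing the features in `fixed` to their values in x
    determines the prediction of the linear classifier sign(w @ x + b)
    for every completion of the free features inside the box [lo, hi].

    Illustrative sketch only: runs in O(n) time, since each free feature
    is independently pushed to its adversarial bound.
    """
    n = len(x)
    pred = np.sign(w @ x + b)
    # Contribution of the fixed features plus the bias term.
    score = b + sum(w[i] * x[i] for i in fixed)
    # Adversarially assign every free feature against the prediction.
    for i in range(n):
        if i in fixed:
            continue
        if pred > 0:
            score += min(w[i] * lo[i], w[i] * hi[i])  # minimize the score
        else:
            score += max(w[i] * lo[i], w[i] * hi[i])  # maximize the score
    # The subset is sufficient iff the prediction survives the worst case.
    return np.sign(score) == pred

# Toy usage: feature 0 alone outweighs the combined influence of the rest.
w = np.array([2.0, -1.0, 0.5])
b = 0.0
x = np.array([1.0, 0.0, 0.0])
lo, hi = np.full(3, -1.0), np.full(3, 1.0)
print(is_sufficient_reason(w, b, x, fixed={0}, lo=lo, hi=hi))  # True
```

The contrast the abstract draws is that the analogous *global* queries over such models, as well as local and global queries over decision trees and neural networks, can land in very different complexity classes than this simple linear-time check.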
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5259