Keywords: machine learning, sparsity, interpretability, optimization, identifiability
TL;DR: We prove it is possible to identify an extremely sparse intermediate latent variable with only end-to-end supervision, and introduce Sparling, an extreme activation sparsity layer and optimization algorithm that can learn such a latent variable.
Abstract: Real-world processes often contain intermediate state that can be modeled as an extremely sparse activation tensor. In this work, we analyze the identifiability of such sparse and local latent intermediate variables, which we call motifs.
We prove our Motif Identifiability Theorem, which states that under certain assumptions these motifs can be identified precisely solely by reducing end-to-end error. Additionally, we provide the Sparling algorithm, which uses a new kind of information bottleneck to enforce levels of activation sparsity unachievable with other techniques. Empirically, we find that extreme sparsity is necessary for accurate intermediate-state modeling. On our synthetic DigitCircle domain, as well as the LaTeXOCR and AudioMNISTSequence domains, we localize the intermediate states precisely, up to feature permutation, with >90% accuracy, even though we train only end-to-end.
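The abstract does not specify the sparsity mechanism, but one common way to impose extreme activation sparsity as a bottleneck is a top-k layer that zeroes all but the few largest activations per example. The sketch below is illustrative only (the function name and the use of top-k are our assumptions, not the paper's exact Sparling layer):

```python
import numpy as np

def topk_sparse(x, k):
    """Keep the k largest activations in each row, zero the rest.
    Hypothetical stand-in for an extreme-sparsity bottleneck layer;
    not the paper's actual Sparling implementation."""
    # indices of the k largest entries per row
    idx = np.argpartition(x, -k, axis=-1)[..., -k:]
    out = np.zeros_like(x)
    # copy only those k entries into an otherwise-zero tensor
    np.put_along_axis(out, idx, np.take_along_axis(x, idx, axis=-1), axis=-1)
    return out

x = np.array([[0.1, 3.0, -0.5, 2.0],
              [1.0, 0.2, 0.3, -1.0]])
z = topk_sparse(x, k=1)
# z keeps only the single largest activation per row:
# [[0., 3., 0., 0.],
#  [1., 0., 0., 0.]]
```

Because the layer's output is zero almost everywhere, any signal that reaches the downstream network must pass through a handful of localized activations, which is what makes the intermediate state inspectable.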
Primary Area: interpretability and explainable AI
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3980