Keywords: Self-supervised learning, spiking neural networks, information theory, XAI
Abstract: Deep neural networks trained end-to-end have proven effective across a wide range of machine learning tasks. However, end-to-end learning has a drawback: the learned features and information are represented implicitly in the network parameters and are therefore not explainable, in the sense that they cannot be used as explicit regularities to explain the data probability distribution. To resolve this issue, we propose in this paper a new machine learning theory that mathematically characterizes what 'non-randomness' and 'regularities' are in a data probability distribution. Our theory applies a spiking function to distinguish data samples from random noise. In this process, 'non-randomness', a large amount of information, is encoded by the spiking function into regularities, a small amount of information. We then extend the theory to multiple spiking functions applied to the same data distribution, and argue that the 'best' regularities, or the optimal spiking functions, are those that capture the largest amount of information from the data distribution while encoding the captured information into the smallest amount of information. By optimizing the spiking functions, one can obtain an explainable self-supervised learning system.
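To make the idea concrete, below is a minimal toy sketch of a single spiking function trained to separate synthetic data from uniform random noise. The thresholded linear form of the spiking function, the sigmoid-surrogate loss, and the L1 penalty (standing in for the "smallest amount of information" criterion) are all illustrative assumptions for this sketch, not the specific formulation proposed in the paper.

```python
# Illustrative sketch only: one "spiking function" trained to distinguish
# data samples from uniform noise. The hard 0/1 threshold is the spike;
# a sigmoid surrogate provides gradients, and the L1 term is one possible
# way to keep the learned "regularity" (the weights) small.
import torch

torch.manual_seed(0)
dim = 16
data = torch.randn(512, dim) * 0.3 + 1.0            # toy "data" cluster
noise = torch.rand(512, dim) * 4.0 - 2.0             # uniform noise on the same range
x = torch.cat([data, noise])
y = torch.cat([torch.ones(512), torch.zeros(512)])   # 1 = data, 0 = noise

w = torch.zeros(dim, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([w, b], lr=0.05)

def spike(inputs):
    """Hard 0/1 spiking output used at inference time."""
    return (inputs @ w + b > 0).float()

for step in range(300):
    logits = x @ w + b
    # Surrogate objective: data should spike (1), noise should not (0);
    # the L1 term pushes the encoding toward a small amount of information.
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y) \
           + 1e-3 * w.abs().sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = (spike(x) == y).float().mean()
    print(f"data-vs-noise accuracy: {acc:.3f}")
```

In this sketch, the trained weights play the role of the extracted regularity: a compact description that separates the data distribution from randomness.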
Primary Area: learning theory
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8245