A General Theoretical Framework for Learning Smallest Interpretable Models

Published: 01 Jan 2024, Last Modified: 26 Jul 2025. Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI 2024). License: CC BY 4.0.
Abstract: We develop a general algorithmic framework for computing smallest symbolic models that represent given data, and show that this computation is fixed-parameter tractable. Our framework applies to all ML model types that admit a certain extension property. By establishing this extension property for decision trees, decision sets, decision lists, and binary decision diagrams, we show that minimizing each of these fundamental model types is fixed-parameter tractable. Our framework even applies to ensembles, which combine individual models by majority decision.
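To make the underlying problem concrete, the sketch below illustrates what "smallest model representing given data" means for one of the model types mentioned, decision lists. It is a naive brute-force search over model sizes, not the paper's fixed-parameter algorithm; the dataset, rule representation, and all function names are illustrative assumptions.

```python
from itertools import product

# Toy dataset (illustrative, not from the paper): binary feature
# vectors labeled with the AND of the two bits.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
N_FEATURES = 2

def eval_decision_list(rules, default, x):
    """Apply rules (feature, value, label) in order; unmatched -> default."""
    for feat, val, label in rules:
        if x[feat] == val:
            return label
    return default

def smallest_decision_list(data, n_features):
    """Brute force: try decision lists of size 0, 1, 2, ... and return
    the first one consistent with every example in the data."""
    literals = [(f, v) for f in range(n_features) for v in (0, 1)]
    for size in range(len(literals) + 1):
        for combo in product(literals, repeat=size):
            for labels in product((0, 1), repeat=size):
                for default in (0, 1):
                    rules = [(f, v, l) for (f, v), l in zip(combo, labels)]
                    if all(eval_decision_list(rules, default, x) == y
                           for x, y in data):
                        return rules, default
    return None

rules, default = smallest_decision_list(DATA, N_FEATURES)
print(len(rules), rules, default)
```

Because the search tries sizes in increasing order, the first consistent list it returns is a smallest one; the exponential cost of this naive enumeration is exactly what motivates the fixed-parameter approach the paper develops.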