Abstract: Recent advances in deep learning from probability distributions enable classification or regression from distribution samples in a way that is invariant under permutation of the samples. The first contribution of this paper is the Dida distributional architecture, which extends the state of the art to achieve invariance under permutation of the features as well. The properties of Dida, universal approximation and robustness with respect to bounded transformations of the input distribution, are established. The second contribution is an empirical demonstration of the merits of the Dida architecture on two tasks defined at the dataset level. The first task consists of predicting whether two dataset patches are extracted from the same initial dataset. The second task consists of predicting whether one hyper-parameter configuration dominates another, in terms of the learning performance of a fixed learning algorithm on datasets extracted from the OpenML benchmarking suite. On both tasks, Dida outperforms the state of the art as well as models based on hand-crafted meta-features. The penultimate layer neurons can thus be viewed as learned meta-features, defining an accurate and computationally affordable description of datasets.
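To make the double invariance concrete, below is a minimal sketch (not the Dida architecture itself) of a dataset encoder whose output is unchanged when either the samples (rows) or the features (columns) are permuted; the DeepSets-style mean pooling, layer sizes, and class name are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class DoublyInvariantEncoder(nn.Module):
    """Toy dataset encoder invariant to permutations of samples and features.
    Illustrative assumption, not the Dida architecture from the paper."""

    def __init__(self, hidden_dim: int = 64, embed_dim: int = 32):
        super().__init__()
        # phi acts on each scalar cell independently (permutation-equivariant).
        self.phi = nn.Sequential(nn.Linear(1, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, hidden_dim))
        # rho maps the pooled representation to a dataset embedding,
        # playing the role of "learned meta-features".
        self.rho = nn.Sequential(nn.Linear(hidden_dim, embed_dim), nn.ReLU(),
                                 nn.Linear(embed_dim, embed_dim))

    def forward(self, dataset: torch.Tensor) -> torch.Tensor:
        # dataset: (n_samples, n_features)
        cells = self.phi(dataset.unsqueeze(-1))   # (n, d, hidden)
        pooled = cells.mean(dim=(0, 1))           # pool over samples and features
        return self.rho(pooled)                   # (embed_dim,)

if __name__ == "__main__":
    enc = DoublyInvariantEncoder()
    data = torch.randn(100, 8)
    shuffled = data[torch.randperm(100)][:, torch.randperm(8)]
    print(torch.allclose(enc(data), enc(shuffled), atol=1e-5))  # True
```

Mean pooling over both axes is the simplest way to obtain the double invariance; the paper's architecture achieves it with richer interaction functions while retaining universal approximation.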
Keywords: Meta-features, Neural network architecture, AutoML
One-sentence Summary: Existing distribution-based neural networks are extended to achieve invariance under permutation of the features, with theoretical guarantees of universal approximation and robustness, making them suitable for learning dataset meta-features.
Reproducibility Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Reviewers: Herilalaina Rakotoarison, heri@lri.fr
Main Paper And Supplementary Material: pdf
Code And Dataset Supplement: https://anonymous.4open.science/r/dida-metafeatures-5FD5
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/distribution-based-invariant-deep-networks/code)