Keywords: Sample compression, Generalization bounds, Hypernetworks, Meta-learning, Neural networks
TL;DR: We present a new meta-learning framework based on learning the reconstruction function from sample compression theory.
Abstract: Reconstruction functions are pivotal in sample compression theory, a framework for deriving tight generalization bounds. From a small subset of the training set (the compression set) and an optional stream of information (the message), they recover a predictor previously learned from the whole training set. While reconstruction functions are usually fixed, we propose to learn them. To facilitate optimization and increase the expressiveness of the message, we derive a new sample compression generalization bound for real-valued messages.
Building on this theoretical analysis, we present a new hypernetwork architecture that outputs predictors with tight generalization guarantees when trained using an original meta-learning framework. We then report the results of promising preliminary experiments.
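As a rough illustration of the core idea (a sketch under assumed interfaces, not the submission's actual architecture), a learned reconstruction function can be modeled as a hypernetwork that maps a compression set and a real-valued message to the parameters of a predictor. All names, dimensions, and design choices below are hypothetical.

```python
import torch
import torch.nn as nn

class ReconstructionHypernetwork(nn.Module):
    """Hypothetical learned reconstruction function: maps a compression set
    (k labeled examples) and a real-valued message vector to the weights of
    a small linear predictor. Sizes and layers are illustrative only."""

    def __init__(self, input_dim, k, msg_dim, hidden_dim=64):
        super().__init__()
        # Input: flattened compression set (features + labels) and the message.
        in_features = k * (input_dim + 1) + msg_dim
        # Output: weight vector and bias of a linear predictor.
        out_features = input_dim + 1
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_features),
        )

    def forward(self, comp_x, comp_y, message):
        # comp_x: (k, input_dim), comp_y: (k,), message: (msg_dim,)
        z = torch.cat([comp_x.flatten(), comp_y.flatten(), message])
        theta = self.net(z)
        w, b = theta[:-1], theta[-1]

        def predictor(x):
            # Predictor whose parameters were generated by the hypernetwork.
            return x @ w + b

        return predictor

# Usage: reconstruct a predictor from a 3-example compression set and a message.
input_dim, k, msg_dim = 5, 3, 4
recon = ReconstructionHypernetwork(input_dim, k, msg_dim)
comp_x, comp_y = torch.randn(k, input_dim), torch.randn(k)
message = torch.randn(msg_dim)
predictor = recon(comp_x, comp_y, message)
print(predictor(torch.randn(10, input_dim)).shape)  # torch.Size([10])
```

In this reading, meta-learning the hypernetwork's weights corresponds to learning the reconstruction function itself, while the sample compression bound applies to each reconstructed predictor through its compression set and message.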
Submission Number: 61