Self-Supervised Representation Learning on Neural Network Weights for Model Characteristic Prediction

May 21, 2021 (edited Dec 24, 2021) · NeurIPS 2021 Poster
  • Keywords: Representation Learning, Self-Supervised Learning, Weight Space, Parameter Space, Augmentation, Model Zoos
  • TL;DR: This paper proposes to learn self-supervised representations of the weights of populations of NN models using novel data augmentations and an adapted transformer architecture.
  • Abstract: Self-Supervised Learning (SSL) has been shown to learn useful and information-preserving representations. Neural Networks (NNs) are widely applied, yet their weight space is still not fully understood. Therefore, we propose to use SSL to learn hyper-representations of the weights of populations of NNs. To that end, we introduce domain-specific data augmentations and an adapted attention architecture. Our empirical evaluation demonstrates that self-supervised representation learning in this domain is able to recover diverse NN model characteristics. Further, we show that the proposed learned representations outperform prior work for predicting hyper-parameters, test accuracy, and generalization gap, and that they transfer to out-of-distribution settings.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/HSG-AIML/NeurIPS_2021-Weight_Space_Learning
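A key ingredient named in the abstract is domain-specific data augmentation on NN weights. One such augmentation is neuron permutation: reordering the hidden units of an MLP layer (and the matching columns of the next layer) changes the flattened weight vector without changing the function the network computes, which makes the original and permuted weights a natural positive pair for contrastive SSL. The sketch below illustrates this idea with NumPy; all names are illustrative and not taken from the paper's codebase.

```python
import numpy as np

# Hedged sketch of a neuron-permutation weight augmentation.
# Permuting hidden units (rows of W1, entries of b1, columns of W2)
# yields a different weight vector that computes the same function.
# Names and shapes here are assumptions for illustration only.

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 3
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), rng.normal(size=d_out)

def mlp(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2

def permute_hidden(W1, b1, W2, perm):
    # Reorder hidden neurons consistently across the adjacent layers.
    return W1[perm], b1[perm], W2[:, perm]

# A deterministic non-identity permutation of the hidden units.
perm = np.roll(np.arange(d_hidden), 1)
W1p, b1p, W2p = permute_hidden(W1, b1, W2, perm)

x = rng.normal(size=d_in)
# The augmented weights define the same input-output mapping:
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))

# Flattening both weight sets gives two distinct "views" of one model,
# usable as a positive pair in a contrastive SSL objective.
view_a = np.concatenate([W1.ravel(), b1, W2.ravel(), b2])
view_b = np.concatenate([W1p.ravel(), b1p, W2p.ravel(), b2])
assert not np.allclose(view_a, view_b)
```

In a weight-space encoder such as the adapted attention architecture the paper describes, these flattened views would be tokenized and embedded before the SSL objective is applied; the augmentation above is only one example of a function-preserving transformation of the weights.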
