Decoding Projections From Frozen Random Weights in Autoencoders: What Information Do They Encode?

Published: 23 Sept 2025, Last Modified: 17 Nov 2025 · UniReps 2025 · CC BY 4.0
Track: Extended Abstract Track
Keywords: random weights, neural networks, autoencoder
TL;DR: This study shows that neural networks with random, untrained weights can still capture meaningful information, as demonstrated through extensive experiments with autoencoders across multiple datasets and configurations.
Abstract: Despite the widespread use of gradient-based training, neural networks without gradient updates remain largely unexplored. To examine such networks, this paper uses an image autoencoder to decode embeddings produced by an encoder with fixed random weights. Our experiments span three datasets, six latent dimensions, and 28 initialization configurations. Through these experiments we demonstrate that random weights can capture broad structural themes of the input, and we make a case for their adoption as baseline models.
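The core setup described in the abstract — a frozen random encoder whose embeddings are decoded by a trainable decoder — can be illustrated with a minimal linear sketch. This is not the paper's actual image-autoencoder pipeline: the data, dimensions, Gaussian initialization, and the closed-form least-squares decoder (standing in for a gradient-trained one) are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 32-dim inputs with low-rank (k-dim) structure.
# Sizes are illustrative, not taken from the paper.
n, d, k = 200, 32, 8
basis = rng.normal(size=(k, d))
X = rng.normal(size=(n, k)) @ basis  # inputs lie in a k-dim subspace

# Frozen random encoder: a fixed Gaussian projection, never trained.
W_enc = rng.normal(size=(d, k)) / np.sqrt(d)
Z = X @ W_enc  # embeddings from the untrained encoder

# Trainable decoder: solved here in closed form by least squares,
# as a stand-in for the gradient-trained decoder in the paper.
W_dec, *_ = np.linalg.lstsq(Z, X, rcond=None)
X_hat = Z @ W_dec

mse = float(np.mean((X - X_hat) ** 2))
baseline = float(np.mean(X ** 2))  # error of predicting all zeros
print(mse, baseline)
```

Because the inputs here occupy a k-dimensional subspace and the random projection is generically full-rank on that subspace, the learned decoder recovers the inputs almost exactly — a toy illustration of random weights preserving structural information that a decoder can exploit.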
Submission Number: 69