DO CORESETS, PRUNING, AND QUANTIZATION PRESERVE NEURAL NETWORK REPRESENTATIONS?

Published: 02 Mar 2026, Last Modified: 16 Mar 2026 · ICLR 2026 Workshop GRaM Poster · CC BY 4.0
Track: long paper (up to 8 pages)
Keywords: Representation similarity, Neural network compression, Coreset selection, Pruning, Quantization, Weight symmetry and geometry
Abstract: Neural network compression techniques, such as coreset selection, pruning, and quantization, enable efficient deployment but often induce representational changes that traditional accuracy metrics fail to capture. We propose Representation Similarity (REPS), a multi-faceted diagnostic metric that unifies effective rank, neuron aliveness, class separation, and eigenvalue decay similarity into a single interpretable score, providing a comprehensive evaluation of compression-induced representational degradation. Experiments on CIFAR-10 with ResNet-18 demonstrate that REPS correlates strongly with accuracy drops (Pearson $r=0.988$), substantially outperforming conventional baselines such as weight similarity ($r=0.141$) and prediction agreement. We further provide a sensitivity analysis of REPS component weights and a layer-wise analysis revealing dimensional collapse, neuron death, and class separation degradation, offering interpretable insights into representational integrity under compression. These results position REPS as a robust, lightweight diagnostic tool for guiding compression-aware model design and adaptive deployment in resource-constrained environments.
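The abstract names four representational components combined into one score. A minimal sketch of such a composite metric is given below; the paper's exact formulas, normalizations, and component weights are not shown on this page, so every function, weight, and threshold here is an illustrative assumption (effective rank via spectral entropy, aliveness as variance thresholding, a Fisher-style separation proxy, and cosine similarity of log eigenvalue spectra), not the authors' definition.

```python
import numpy as np

def effective_rank(acts):
    """Entropy-based effective rank of an (n_samples, n_features) activation matrix."""
    s = np.linalg.svd(acts - acts.mean(0), compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-12]
    return float(np.exp(-(p * np.log(p)).sum()))

def aliveness(acts, eps=1e-6):
    """Fraction of neurons whose activations vary across samples (assumed definition)."""
    return float((acts.std(axis=0) > eps).mean())

def class_separation(acts, labels):
    """Between-class vs. total variance: a simple Fisher-style proxy (assumed)."""
    mu = acts.mean(0)
    between = np.mean([np.sum((acts[labels == c].mean(0) - mu) ** 2)
                       for c in np.unique(labels)])
    total = np.sum(acts.var(0)) + 1e-12
    return float(between / total)

def spectrum_similarity(a, b, k=16):
    """Cosine similarity of top-k log singular value spectra (assumed decay measure)."""
    def spec(x):
        s = np.linalg.svd(x - x.mean(0), compute_uv=False)[:k]
        return np.log(s + 1e-12)
    u, v = spec(a), spec(b)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def reps_score(orig, comp, labels, w=(0.25, 0.25, 0.25, 0.25)):
    """Hypothetical REPS-style score: weighted mix of component similarities, 1.0 = identical."""
    ratio = lambda x, y: min(x, y) / (max(x, y) + 1e-12)
    parts = (ratio(effective_rank(orig), effective_rank(comp)),
             ratio(aliveness(orig), aliveness(comp)),
             ratio(class_separation(orig, labels), class_separation(comp, labels)),
             spectrum_similarity(orig, comp))
    return float(np.dot(w, parts))
```

With this construction, an uncompressed model compared against itself scores 1.0, and compression artifacts such as neuron death (zeroed columns) or dimensional collapse (rank loss) pull the score down through the corresponding component.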
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 69