Do Coresets, Pruning, and Quantization Preserve Neural Network Representations? Exploring Geometry-Trajectory-Functional Alignment and Representation Similarity

NeurIPS 2025 Workshop NeurReps Submission 149 Authors

05 Sept 2025 (modified: 29 Oct 2025) · Submitted to NeurReps 2025 · CC BY 4.0
Keywords: Representation similarity, Neural network compression, Coreset selection, Pruning, Quantization, Weight symmetry and geometry, Accuracy–similarity coupling
TL;DR: Symmetry and Geometry in Compressed NN Representations?
Abstract: Neural network compression techniques, such as coreset selection, pruning, and quantization, enable efficient deployment but often induce representational changes that traditional accuracy metrics fail to capture. We propose two complementary and generalizable metrics: Geometry-Trajectory-Functional Alignment (GTFA) and Representation Similarity (REPS). GTFA fuses weight geometry, activation subspace overlap, and confidence-weighted functional similarity, while REPS aggregates effective rank, neuron aliveness, class separation, and eigenvalue decay similarity, providing a multi-faceted evaluation of compression-induced representational degradation. Experiments on CIFAR-10 with ResNet-18 demonstrate that GTFA and REPS correlate strongly with accuracy drops (Pearson $r=0.806$ and $0.988$, respectively), substantially outperforming conventional baselines such as weight similarity ($r=0.141$) and prediction agreement. Layer-wise visualizations reveal dimensional collapse, neuron death, and class separation degradation, offering interpretable insights into representational integrity under compression. These results position GTFA and REPS as robust, lightweight diagnostic tools for guiding compression-aware model design and adaptive deployment in resource-constrained environments.
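The abstract names the ingredients of REPS (effective rank, neuron aliveness, class separation, eigenvalue decay similarity) without giving formulas. The sketch below illustrates one plausible way such per-layer quantities could be computed from activation matrices; every definition here (entropy-based effective rank, variance-thresholded aliveness, a Fisher-style separation ratio, cosine similarity of sorted covariance spectra) is an assumption for illustration, not the authors' implementation.

```python
# Illustrative sketch only: component definitions are assumed, not taken from the paper.
import numpy as np

def effective_rank(acts: np.ndarray) -> float:
    """Entropy-based effective rank of an activation matrix (samples x neurons)."""
    s = np.linalg.svd(acts - acts.mean(axis=0), compute_uv=False)  # singular values
    p = s / (s.sum() + 1e-12)
    return float(np.exp(-(p * np.log(p + 1e-12)).sum()))

def neuron_aliveness(acts: np.ndarray, tol: float = 1e-6) -> float:
    """Fraction of neurons whose activation variance exceeds a small tolerance."""
    return float((acts.var(axis=0) > tol).mean())

def class_separation(acts: np.ndarray, labels: np.ndarray) -> float:
    """Ratio of between-class to within-class scatter (Fisher-style score)."""
    overall = acts.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        cls = acts[labels == c]
        between += len(cls) * np.sum((cls.mean(axis=0) - overall) ** 2)
        within += np.sum((cls - cls.mean(axis=0)) ** 2)
    return float(between / (within + 1e-12))

def eigendecay_similarity(acts_a: np.ndarray, acts_b: np.ndarray) -> float:
    """Cosine similarity between sorted covariance spectra of two models' activations."""
    def spectrum(a):
        ev = np.linalg.eigvalsh(np.cov(a, rowvar=False))
        return np.sort(ev)[::-1].clip(min=0)
    sa, sb = spectrum(acts_a), spectrum(acts_b)
    n = min(len(sa), len(sb))
    sa, sb = sa[:n], sb[:n]
    return float(sa @ sb / (np.linalg.norm(sa) * np.linalg.norm(sb) + 1e-12))
```

In such a setup, the four scores would be computed per layer for the original and compressed networks and then aggregated into a single REPS value; the aggregation rule (e.g., a weighted average) is likewise left unspecified by the abstract.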
Submission Number: 149