Trajectory-Based Neural Darwinism in CNNs: Variation, Competition, and Selective Retention

TMLR Paper 5789 Authors

01 Sept 2025 (modified: 13 Sept 2025) · Under review for TMLR · CC BY 4.0
Abstract: Understanding how neural networks develop and stabilize their internal representations remains a central challenge in deep learning. Inspired by Edelman’s theory of Neural Darwinism, we investigate whether competitive dynamics analogous to neuronal group selection emerge in artificial neural networks during training. Through detailed trajectory analyses of neuron activations, weights, and cumulative representational change across architectures ranging from a three-layer MLP to the convolutional networks ResNet-18, VGG-16, and ResNet-50, we uncover consistent patterns of variation, competition, and selective retention. Ablation studies reveal that networks tolerate removal of large fractions of neurons without accuracy degradation, indicating high redundancy; beyond a critical threshold, however, performance collapses as the core subset of task-critical neurons is disrupted. Across multiple datasets and architectures, neuron trajectory dynamics show that surviving neurons sustain longer, more coherent representational paths, stronger weight norms, and higher activations, while eliminated neurons stagnate or fade toward representational silence. Overall, our findings are consistent with a Darwinian view of representation learning: CNNs exhibit robustness through redundancy at early stages, followed by selective consolidation of highly specialized neurons in deeper layers.
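To make the kind of ablation study described above concrete, the following is a minimal illustrative sketch, not the authors' code or protocol: it zeroes a random fraction of output channels in one late convolutional layer of a torchvision ResNet-18 and compares the network's outputs before and after. The choice of layer (`layer4[1].conv2`), the ablated fraction (0.5), and the use of a random input batch in place of a real test set are all assumptions made purely for illustration; the paper tracks test accuracy on real datasets as the ablated fraction grows.

```python
# Illustrative sketch (assumptions noted above): random channel ablation in a CNN.
import torch
import torchvision

def ablate_channels(conv: torch.nn.Conv2d, fraction: float) -> None:
    """Zero the weights (and bias, if any) of a random fraction of output channels."""
    n_out = conv.out_channels
    drop = torch.randperm(n_out)[: int(fraction * n_out)]
    with torch.no_grad():
        conv.weight[drop] = 0.0
        if conv.bias is not None:
            conv.bias[drop] = 0.0

model = torchvision.models.resnet18(weights=None).eval()
x = torch.randn(8, 3, 224, 224)  # stand-in for a real evaluation batch

with torch.no_grad():
    baseline = model(x)

# Ablate half the channels in one late conv layer (hypothetical choice for illustration).
ablate_channels(model.layer4[1].conv2, fraction=0.5)

with torch.no_grad():
    ablated = model(x)

# Relative change in the logits as a crude proxy; the paper instead measures test
# accuracy as the ablated fraction increases, looking for the collapse threshold.
rel_change = (ablated - baseline).norm() / baseline.norm()
print(f"relative logit change after ablation: {rel_change.item():.3f}")
```

Sweeping `fraction` from 0 to 1 and evaluating accuracy at each step would reproduce the qualitative pattern the abstract describes: a long flat regime of redundancy followed by a sharp collapse once task-critical neurons are removed.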
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: N/A
Assigned Action Editor: ~Stanislaw_Kamil_Jastrzebski1
Submission Number: 5789