Trajectory-Based Neural Darwinism in Convolutional Neural Networks: Variation, Competition, and Selective Retention
Keywords: Neural Darwinism, neuron trajectory, Neuron Darwinian Dynamics System, Convolutional Neural Networks
Abstract: Understanding how artificial neural networks develop and stabilize internal representations remains a central challenge in deep learning. Motivated by Edelman’s theory of Neural Darwinism, we investigate whether competitive, selection-like dynamics emerge during training and how they shape robustness and specialization. We introduce a unified trajectory-based framework, the Neuron Darwinian Dynamics System (NDDS), grounded in the Darwinian principles of variation, competition, and selective retention, which enables the analysis of neuron activations, weights, and representational paths across diverse architectures and datasets. We conduct two complementary analyses. Ablation experiments demonstrate that networks maintain accuracy under extensive neuron removal, revealing strong redundancy, yet collapse sharply beyond a critical threshold, identifying task-critical subsets. Dynamic trajectory analyses further reveal consistent evolutionary patterns: surviving neurons sustain coherent representational trajectories, stronger weight norms, and higher activations, whereas eliminated neurons stagnate toward representational silence. Overall, these results support a Darwinian perspective on representation learning: CNNs achieve robustness through redundancy in early stages and progressively consolidate specialized neurons that underwrite stable, task-relevant representations.
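The ablation protocol described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it is an assumed toy setup using a single NumPy fully connected layer as a stand-in for CNN channels, where units are ranked by weight norm (a hypothetical proxy for the paper's "survived" neurons) and the weakest half are zeroed out to probe redundancy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected layer: 16 inputs -> 32 hidden units (stand-in for CNN channels).
W = rng.normal(size=(32, 16))
x = rng.normal(size=16)

def forward(W, x, ablated=()):
    """ReLU layer output with the given unit indices zeroed (ablated)."""
    h = np.maximum(W @ x, 0.0)
    h[list(ablated)] = 0.0
    return h

# Rank units by incoming weight norm; low-norm units play the role of
# candidates for elimination in this illustrative setup.
norms = np.linalg.norm(W, axis=1)
weakest_first = np.argsort(norms)

baseline = forward(W, x)

# Ablate the weakest half of the units; strong redundancy would predict
# only a modest change in the layer's output direction.
half = weakest_first[:16]
ablated_out = forward(W, x, ablated=half)

overlap = np.dot(baseline, ablated_out) / (
    np.linalg.norm(baseline) * np.linalg.norm(ablated_out) + 1e-12
)
print(f"cosine similarity after ablating 50% of units: {overlap:.3f}")
```

In a full experiment one would replay this removal at many ablation fractions and track task accuracy rather than output similarity, looking for the sharp collapse beyond a critical threshold that the abstract reports.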
Primary Area: learning theory
Submission Number: 5133