Keywords: Co-Activation Patterns, Modality separability, Blockwise coordinate update, Backpropagation alternatives, Local learning rules
TL;DR: BP-free learning via co-activation patterns: layer-local updates with global coupling that match BP accuracy, train faster, and naturally support multimodal, loosely coupled sub-networks.
Abstract: Traditional end-to-end neural networks are designed to optimise the predictive accuracy of the final output layer, making training dependent on error backpropagation (BP). Although BP has achieved remarkable success across a wide range of tasks, it has been criticised for its reliance on precise long-range gradient transmission, weight symmetry, and sequential learning constraints. Inspired by co-activation patterns (CAPs) in neuroscience, we propose a learning framework centred on the separability of different patterns, which circumvents the dependence on BP. In this framework, the network “output” is redefined as a global activation state aggregated across layers, with the backbone regarded as a pattern extractor; task discrimination is achieved by evaluating the cosine similarity between CAPs. From an optimisation perspective, each layer updates its parameters using only its own partial derivatives, removing the reliance on long-range gradient propagation, while global coupling across layers is maintained through fractional normalisation and inter-class competition. In addition, constraints on the co-activation patterns allow task-specific sub-networks to emerge spontaneously. More importantly, the framework extends readily to cross-modal integration and multimodal joint inference, enabling heterogeneous, independent sub-networks to operate in a loosely coupled manner via CAPs, without weight sharing or long-range gradient exchange. Experimental results across multiple datasets demonstrate that the proposed CAPs-based method matches the accuracy of classical BP while significantly accelerating training.
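The inference side of the abstract — a global co-activation pattern aggregated across layers, with task discrimination by cosine similarity to class-level CAPs — can be illustrated with a minimal sketch. Everything here (the two-layer ReLU backbone, prototype averaging, names such as `forward` and `predict`) is an illustrative assumption, not the paper's actual architecture or update rule; the local learning, fractional normalisation, and inter-class competition described in the abstract are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical untrained two-layer backbone (shapes are arbitrary choices).
W1 = rng.normal(size=(4, 8)) * 0.5
W2 = rng.normal(size=(8, 6)) * 0.5

def forward(x):
    """Return the global co-activation pattern: activations of all layers concatenated."""
    h1 = np.maximum(0.0, x @ W1)      # layer-1 activations
    h2 = np.maximum(0.0, h1 @ W2)     # layer-2 activations
    return np.concatenate([h1, h2])   # CAP = global activation state across layers

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Toy two-class data; class CAPs are taken as mean patterns over a few samples.
X0 = rng.normal(loc=+1.0, size=(10, 4))
X1 = rng.normal(loc=-1.0, size=(10, 4))
proto0 = np.mean([forward(x) for x in X0], axis=0)
proto1 = np.mean([forward(x) for x in X1], axis=0)

def predict(x):
    """Classify by cosine similarity between the sample's CAP and class CAPs."""
    cap = forward(x)
    return 0 if cosine(cap, proto0) >= cosine(cap, proto1) else 1
```

Because the decision uses only the aggregated activation state, the backbone acts purely as a pattern extractor, consistent with the abstract's redefinition of the network "output".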
Supplementary Material: zip
Primary Area: learning theory
Submission Number: 12770