Keywords: Network Neuroscience, Multi-Task Learning, Modularity, Connectome Analysis, Information Theory
TL;DR: Neuronal networks achieve optimal multi-task learning performance at moderate modularity levels, and we use information theory to explain why both over-modularized and under-modularized architectures fail at knowledge transfer.
Abstract: While highly modular biological neural networks excel at multi-domain cognitive processing, the computational principles underlying this evolutionary advantage remain unexplored. This study systematically quantifies how network modularity determines multi-task learning capability across diverse computational tasks. First, we analyze connectome data from multiple species and reveal pronounced modular organization in each. Further, to examine modularity's computational role, we design a multi-task learning framework using structurally constrained recurrent neural networks trained on diverse task sets. Our key finding is a non-monotonic relationship between network modularity and multi-task learning performance: performance degrades significantly at both extremes of modularity. Critically, single-task learning shows no systematic relationship with modularity, indicating that the modular advantage is specific to scenarios requiring cross-task information flow.
Moreover, we develop an information-theoretic framework that proves cross-module mutual information exhibits a quadratic dependence on modularity. These findings provide quantitative insight into biological neural organization and offer design principles for multi-task artificial intelligence systems.
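For readers unfamiliar with the modularity measure underlying the abstract's claims, a minimal sketch of how modularity is typically quantified on a network (using NetworkX's Newman modularity on a toy two-community graph; the paper's exact metric and data are not specified here, so this is illustrative only):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy network: two dense triangles joined by a single bridge edge,
# mimicking a strongly modular architecture
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2),   # module A
                  (3, 4), (3, 5), (4, 5),   # module B
                  (2, 3)])                  # bridge between modules

# Detect communities, then score the partition with Newman's modularity Q
communities = greedy_modularity_communities(G)
Q = modularity(G, communities)
print(len(communities), Q)  # higher Q indicates stronger modular structure
```

Sweeping the density of bridge edges in such a construction is one simple way to generate networks along the low-to-high modularity axis that the study's non-monotonic performance curve is measured over.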
Primary Area: applications to neuroscience & cognitive science
Submission Number: 9412