Keywords: dynamical stochastic networks, learnability, field theory, neural networks
Abstract: A persistent puzzle appears across multiple fields, yet its solution remains elusive. How can a network of simple nodes, each evolving with only local information and learning with local rules, collectively solve complex global tasks? Such $\textit{dynamical stochastic networks}$ generalize cellular automata and recurrent neural networks, model biological circuits, and can be interpreted as decentralized multi-agent systems. We identify three fundamental challenges in the efficient learning of dynamical stochastic networks: (1) constructing precise yet easy-to-use theoretical models; (2) designing mechanisms for local credit assignment aligned with global objectives; and (3) characterizing the regimes of configurations that enable efficient learning. To address these challenges, we adopt a theoretical framework of objective-driven dynamical stochastic fields, referred to as the $\textit{intelligent field}$, and propose theoretical quantities that capture learnability. Crucially, we show that efficient learning emerges when systems maximize their ability to retain information over time. Experiments demonstrate that local information retention translates into global learnability, informing the practical design of effective dynamical stochastic networks.
Submission Number: 79