Abstract: This paper delves into methodologies that treat spiking architectures as continuously evolving dynamical systems, revealing intriguing parallels with the learning dynamics of the brain. The methods discussed in this paper address multiple challenges of training spiking architectures and highlight the necessity for bio-plausible local learning and increased model scalability in spiking architectures. We begin by exploring an energy-based learning mechanism, namely Equilibrium Propagation (EP), which emphasizes the attainment of stable states by converging to energy minima in each training phase, thus allowing for the formulation of spatially and temporally local state and weight update rules. Subsequently, we examine the synergy achieved by integrating the underlying energy-based convergent RNN architecture with a different energy-based model, namely modern Hopfield networks, thereby amplifying the capabilities of the resultant model. We further explore an efficient learning framework rooted in the convergence of the average spiking rates of neurons, which can be leveraged to build highly scalable spiking architectures. The methodologies discussed allow spiking architectures to move beyond simple vision-related tasks and develop solutions for complex sequence learning problems. Moreover, both frameworks can be used to develop spiking architectures that can be deployed on neuromorphic hardware to realize their energy/power efficiency.