Classic Hebbian learning endows feed-forward networks with sufficient adaptability in challenging reinforcement learning tasks

12 May 2023
Abstract: A common pitfall of current reinforcement learning agents implemented in computational models is their lack of adaptability after optimization. Najarro and Risi [Najarro E, Risi S. Proc 33rd Conf Neural Inf Process Systems (NeurIPS 2020), 2020: 20719–20731] demonstrate how such adaptability may be salvaged in artificial feed-forward networks by optimizing the coefficients of classic Hebbian rules to dynamically control the networks’ weights instead of optimizing the weights directly. Although such models fail to capture many important neurophysiological details, allying the fields of neuroscience and artificial intelligence in this way bears many fruits for both fields, especially when computational models engage with topics that have a rich history in neuroscience, such as Hebbian plasticity.
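
The mechanism the abstract refers to is the optimization of plasticity-rule coefficients rather than of the synaptic weights themselves. Below is a minimal sketch of a generalized Hebbian ("ABCD") update of the kind used by Najarro and Risi; the array shapes, the forward pass, and the function names are illustrative assumptions, not the authors' actual code.

```python
# Sketch of per-connection Hebbian plasticity (assumed ABCD form);
# the coefficients A, B, C, D are what the outer optimization tunes,
# while the weights themselves change continually at run time.
import numpy as np

def hebbian_step(w, pre, post, coeffs, eta=0.01):
    """Update weights w (n_post, n_pre) from pre-/post-synaptic activations
    using per-connection coefficients A, B, C, D (each shaped like w)."""
    A, B, C, D = coeffs
    # Classic generalized Hebbian rule: dw = eta * (A*pre*post + B*pre + C*post + D)
    dw = eta * (A * np.outer(post, pre) + B * pre[None, :] + C * post[:, None] + D)
    return w + dw

rng = np.random.default_rng(0)
n_pre, n_post = 8, 4
w = rng.normal(scale=0.1, size=(n_post, n_pre))          # weights start random
coeffs = tuple(rng.normal(scale=0.1, size=(n_post, n_pre)) for _ in range(4))

pre = rng.normal(size=n_pre)             # input activations for one step
post = np.tanh(w @ pre)                  # feed-forward output
w = hebbian_step(w, pre, post, coeffs)   # weights adapt during the episode
```

Because only the coefficients are optimized (for example, by an evolution strategy), the weights remain plastic after training, which is the source of the adaptability discussed above.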