Learning Structure from the Ground up---Hierarchical Representation Learning by Chunking

Published: 31 Oct 2022, Last Modified: 19 Jan 2023
Venue: NeurIPS 2022 Accept
Readers: Everyone
Keywords: Representation Learning, Structure Learning, Cognitive Science, Neuroscience
TL;DR: A Gestalt-inspired learning algorithm that acquires interpretable representations from non-i.i.d. sequential data.
Abstract: From learning to play the piano to speaking a new language, reusing and recombining previously acquired representations enables us to master complex skills and adapt easily to new environments. Inspired by the Gestalt principle of grouping by proximity and theories of chunking in cognitive science, we propose a hierarchical chunking model (HCM). HCM learns representations from non-i.i.d. sequential data from the ground up by first discovering the minimal atomic sequential units as chunks. As learning progresses, a hierarchy of chunk representations is acquired by combining previously learned representations into more complex ones, guided by sequential dependence. We provide learning guarantees on an idealized version of HCM and demonstrate that HCM learns meaningful and interpretable representations in a human-like fashion. Our model can be extended to learn visual, temporal, and visual-temporal chunks. The interpretability of the learned chunks can be used to assess transfer or interference when the environment changes. Finally, on an fMRI dataset, we demonstrate that HCM learns interpretable chunks of functionally coactivated regions, as well as hierarchical modular and sub-modular structures consistent with the neuroscience literature. Taken together, our results show how cognitive science in general, and theories of chunking in particular, can inform novel and more interpretable approaches to representation learning.
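The abstract only outlines the procedure (discover atomic chunks, then merge them bottom-up when they reliably follow one another). As a rough illustration, below is a minimal greedy sketch in Python that merges the most frequent adjacent pair of chunks, in the spirit of byte-pair encoding. The function name, the `n_merges` and `min_count` parameters, and the raw-frequency merge criterion are illustrative assumptions; the paper's actual HCM uses a sequential-dependence criterion and comes with learning guarantees not reflected here.

```python
# Minimal sketch of hierarchy-by-chunking on a symbol sequence.
# All names and the frequency-based merge rule are illustrative
# assumptions, not the paper's HCM algorithm.
from collections import Counter

def chunk_sequence(seq, n_merges=3, min_count=2):
    """Iteratively merge the most frequent adjacent pair into a new chunk."""
    seq = [(s,) for s in seq]                 # atomic units: single-symbol chunks
    vocab = set(seq)                          # learned chunk inventory
    for _ in range(n_merges):
        pairs = Counter(zip(seq, seq[1:]))    # adjacent-pair (sequential) counts
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < min_count:                 # stop when co-occurrence is too weak
            break
        merged = a + b                        # new chunk = concatenated parts
        vocab.add(merged)
        out, i = [], 0                        # re-parse the sequence with the new chunk
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                out.append(merged)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, vocab

# Example: "AB" recurs most often, so it is chunked first;
# then "ABC" forms on top of it, yielding a two-level hierarchy.
parsed, chunks = chunk_sequence(list("ABCABCABXABC"))
print(parsed)                                 # e.g. [('A','B','C'), ..., ('A','B'), ('X',), ...]
print(sorted(chunks, key=len))
```

In this toy run the chunk "AB" is acquired before "ABC", mirroring the ground-up, hierarchical acquisition described in the abstract; the greedy frequency rule is merely a stand-in for HCM's dependence-based chunking.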
Supplementary Material: pdf
