The Effect of Representational Compression on Flexibility Across Learning in Humans and Artificial Neural Networks
Track: long paper (up to 10 pages)
Domain: cognitive science
Abstract: Humans can generalise from past experiences to novel situations and revise prior knowledge to flexibly adapt to changing contexts and goals. The representational geometry framework formalises how information is structured in the brain and suggests that abstraction involves a trade-off between generalisation and flexibility. However, how the geometry of task representations evolves across learning, and how it relates to behaviour, remains unclear. Here, we tested the hypothesis that task representations become compressed over the course of learning, trading flexibility for generalisability. Using an extra-dimensional shifting task, we manipulated pretraining length to control the degree of compression. In both humans and artificial neural networks, longer pretraining was associated with decreased flexibility. Analysis of network dynamics suggested that greater compression incurred a higher representational reorganisation cost, restricting flexibility. However, introducing an auxiliary reconstruction loss maintained higher representational dimensionality, mitigating the impairment of flexibility. Our findings point towards a representational-geometry-based mechanism that explains how compression constrains flexibility and how preserving representational richness enhances it.
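The auxiliary reconstruction loss mentioned in the abstract is, in its general form, a second objective that pressures the hidden representation to retain input information alongside the task objective, counteracting compression. Below is a minimal PyTorch sketch of this general pattern; the architecture, layer sizes, and the `recon_weight` coefficient are illustrative assumptions, not the paper's actual model or hyperparameters.

```python
import torch
import torch.nn as nn

class TaskNetWithReconstruction(nn.Module):
    """Shared encoder with a task head and an auxiliary reconstruction head.

    Hypothetical sketch: the reconstruction head forces the shared hidden
    layer to preserve input information, maintaining higher dimensionality.
    """
    def __init__(self, input_dim: int, hidden_dim: int, n_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        self.task_head = nn.Linear(hidden_dim, n_classes)   # task (classification) output
        self.recon_head = nn.Linear(hidden_dim, input_dim)  # reconstructs the input

    def forward(self, x):
        h = self.encoder(x)
        return self.task_head(h), self.recon_head(h)

def training_step(model, x, y, optimizer, recon_weight=0.5):
    # Combined objective: task loss plus a weighted reconstruction term.
    # recon_weight is an assumed coefficient balancing the two pressures.
    logits, x_hat = model(x)
    task_loss = nn.functional.cross_entropy(logits, y)
    recon_loss = nn.functional.mse_loss(x_hat, x)
    loss = task_loss + recon_weight * recon_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, setting `recon_weight` to zero recovers pure task training, under which the encoder is free to compress away task-irrelevant input dimensions; a positive weight keeps those dimensions represented, which is the property the abstract links to preserved flexibility.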
Submission Number: 23