A neural geometry approach comprehensively explains apparently conflicting models of visual perceptual learning
Abstract: Visual perceptual learning (VPL), defined as long-term improvement in a visual task, is considered a crucial tool for elucidating underlying visual and brain plasticity. Previous studies have proposed several neural models of VPL, including changes in neural tuning or in noise correlations. Here, to adjudicate between these different models, we propose that all neural changes at single units can be conceptualized as geometric transformations of population response manifolds in a high-dimensional neural space. Following this neural geometry approach, we identified neural manifold shrinkage due to reduced trial-by-trial population response variability, rather than tuning or correlation changes, as the primary mechanism of VPL. Furthermore, manifold shrinkage successfully explains VPL effects across artificial neural responses in deep neural networks, multivariate blood-oxygenation-level-dependent signals in humans and multiunit activities in monkeys. These converging results suggest that our neural geometry approach comprehensively explains a wide range of empirical results and reconciles previously conflicting models of VPL.

Previous studies have proposed conflicting models of visual perceptual learning. Leveraging deep neural network modelling, human functional MRI and multiunit recordings in macaques, Cheng et al. introduce a neural geometry approach to reconcile past findings, proposing a unified theory of visual perceptual learning.
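To make the core claim concrete, the following minimal sketch (not the authors' analysis pipeline; all neuron counts, covariance structure and the 0.5 shrinkage factor are illustrative assumptions) shows why shrinking trial-by-trial population variability alone can improve stimulus discriminability, even when tuning (mean responses) and the noise-correlation structure stay fixed. Discriminability is measured here with the standard linear Fisher information, d'^2 = Δμᵀ Σ⁻¹ Δμ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50

# Fixed "tuning": mean population responses to two nearby stimuli.
mu_a = rng.normal(5.0, 1.0, n_neurons)
mu_b = mu_a + rng.normal(0.0, 0.15, n_neurons)  # small stimulus difference

# Trial-by-trial noise covariance with limited-range (AR(1)-like) correlations.
sd = rng.uniform(0.8, 1.5, n_neurons)
idx = np.arange(n_neurons)
corr = 0.2 ** (np.abs(np.subtract.outer(idx, idx)) / 10.0)
cov_pre = np.outer(sd, sd) * corr

def discriminability(delta_mu, cov):
    """Linear Fisher information d'^2 = delta_mu^T cov^-1 delta_mu."""
    return float(delta_mu @ np.linalg.solve(cov, delta_mu))

delta = mu_b - mu_a

# "Manifold shrinkage": uniformly reduce trial-by-trial variability.
# Scaling the covariance leaves both the correlation matrix and the tuning untouched.
shrink = 0.5
cov_post = shrink * cov_pre

print("d'^2 before learning:", discriminability(delta, cov_pre))
print("d'^2 after shrinkage:", discriminability(delta, cov_post))
# The second value is 1/shrink times the first: tighter response manifolds
# around each stimulus mean yield better discrimination with unchanged tuning.
```

Under these assumptions, the post-learning d'^2 is exactly 1/0.5 = 2 times the pre-learning value, illustrating how manifold shrinkage by itself can account for behavioural improvement that other models attribute to tuning or correlation changes.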
External IDs: doi:10.1038/s41562-025-02149-x