Towards aligned body representations in vision models

Published: 09 Nov 2025, Last Modified: 14 Nov 2025 · AAAI XAI4Science Workshop 2026 · CC BY 4.0
Abstract: Human physical reasoning relies on internal “body” representations: coarse, volumetric approximations that capture an object’s extent and support intuitive predictions about motion and physics. While psychophysical evidence suggests humans use such coarse representations, their internal structure remains largely unknown. Here we test whether vision models trained for segmentation develop comparable representations. We adapt a psychophysical experiment conducted with 50 human participants to a semantic segmentation task and test a family of seven segmentation networks of varying size. We find that smaller models naturally form human-like coarse body representations, whereas larger models tend toward overly detailed, fine-grained encodings. Our results demonstrate that coarse representations can emerge under limited computational resources, and that machine representations can provide a scalable path toward understanding the structure of physical reasoning in the brain.
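As a rough illustration of the kind of comparison the abstract describes (not the authors' actual protocol or metric), one could score how strongly a model's predicted segmentation mask resembles a coarse, body-like mask versus a fine, detailed outline. The sketch below assumes boolean masks as NumPy arrays and uses a simple IoU-based score; the function names and the choice of IoU are illustrative assumptions.

```python
import numpy as np


def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two boolean masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / float(union) if union > 0 else 0.0


def coarseness_score(pred_mask: np.ndarray,
                     fine_mask: np.ndarray,
                     coarse_mask: np.ndarray) -> float:
    """Relative affinity of a prediction for a coarse 'body' mask
    versus the fine, detailed outline of the same object.
    Positive values mean the prediction is closer to the coarse mask.
    (Hypothetical metric for illustration only.)"""
    return iou(pred_mask, coarse_mask) - iou(pred_mask, fine_mask)


# Hypothetical usage: `predictions` maps model names to boolean masks,
# e.g. {"seg-small": small_mask, "seg-large": large_mask}. Under the
# paper's hypothesis, smaller models would tend to score higher
# (more body-like), larger models lower (more fine-grained).
```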