Synthesizing Images on Perceptual Boundaries of ANNs for Uncovering and Manipulating Human Perceptual Variability

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: By sampling along neural network perceptual boundaries, we generate images that induce high variability in human decisions, enabling us to predict and manipulate individual behavior on these samples.
Abstract: Human decision-making in cognitive tasks and daily life exhibits considerable variability, shaped by factors such as task difficulty, individual preferences, and personal experiences. Understanding this variability across individuals is essential for uncovering the perceptual and decision-making mechanisms that humans rely on when faced with uncertainty and ambiguity. We propose a systematic Boundary Alignment Manipulation (BAM) framework for studying human perceptual variability through image generation. BAM combines perceptual boundary sampling in ANNs with human behavioral experiments to systematically investigate this phenomenon. Our perceptual boundary sampling algorithm generates stimuli along ANN perceptual boundaries that intrinsically induce significant perceptual variability. The efficacy of these stimuli is empirically validated through large-scale behavioral experiments involving 246 participants across 116,715 trials, culminating in the variMNIST dataset containing 19,943 systematically annotated images. Through personalized model alignment and adversarial generation, we establish a reliable method for simultaneously predicting and manipulating the divergent perceptual decisions of pairs of participants. This work bridges the gap between computational models and human individual difference research, providing new tools for personalized perception analysis. Code and data for this work are publicly available.
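To illustrate the idea of perceptual boundary sampling described in the abstract, the sketch below shows one generic way such stimuli can be produced: starting from a seed digit, the image is optimized so that a trained classifier assigns near-equal probability to two target classes, placing it on the model's decision boundary. This is a hedged, minimal sketch, not the authors' exact algorithm; the `classifier`, the seed image, and the class pair are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sample_boundary_image(classifier, seed_img, class_a, class_b,
                          steps=200, lr=0.05, ambiguity_weight=1.0):
    """Push an image toward the classifier's class_a-vs-class_b boundary.

    Assumes `classifier` maps a (1, C, H, W) image tensor to class logits.
    """
    x = seed_img.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        logits = classifier(x)                  # shape: (1, num_classes)
        log_p = F.log_softmax(logits, dim=-1)
        # Make both target classes likely...
        target_ll = log_p[0, class_a] + log_p[0, class_b]
        # ...and equally likely (ambiguity term keeps the image on the boundary).
        gap = (log_p[0, class_a] - log_p[0, class_b]) ** 2
        loss = -target_ll + ambiguity_weight * gap
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)                  # keep pixels in a valid range
    return x.detach()
```

Images produced this way are maximally ambiguous for the model between the two chosen classes, which is the property the paper exploits to elicit high decision variability across human observers.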
Lay Summary: Human decision-making in everyday tasks varies widely due to differences in difficulty, personal preferences, and past experiences. Understanding why and how people perceive things differently helps us learn more about how the brain processes uncertain or ambiguous information. In this work, we introduce a new approach to study these differences by creating special images that reveal how people’s perceptions change. We tested these images with hundreds of volunteers and collected a large dataset to better understand individual perception patterns. Our method also allows us to predict and influence how different people might see the same image differently. This research helps connect computer models with human behavior, offering new ways to analyze how perception varies from person to person. All related data and code are shared openly for others to explore.
Link To Code: https://eaterminator.github.io/BAM/
Primary Area: Applications->Neuroscience, Cognitive Science
Keywords: Perceptual variability, Object Recognition, Behavior Manipulation, Behavioral Alignment
Flagged For Ethics Review: true
Submission Number: 7288