Prioritizing Compression Explains Human Perceptual Preferences

Published: 09 Oct 2024 · Last Modified: 02 Dec 2024 · NeurIPS 2024 Workshop IMOL Poster · License: CC BY 4.0
Track: Full track
Keywords: Unsupervised learning, Human preferences, Perceptual fluency, Compression, Efficient coding
TL;DR: We use a weighted loss that prioritizes compression during passive representation learning to learn human-like perceptual preferences.
Abstract: We present prioritized representation learning (PRL), a method to enhance unsupervised representation learning by drawing inspiration from active learning and intrinsic motivations. PRL re-weights training samples based on an intrinsic priority function embodying preferences for certain inputs. We show how common human perceptual biases across different sensory modalities emerge through a priority function promoting compression and demonstrate the effects of biased early exposure on individual preferences. Our results reveal that PRL can mimic the results of active unsupervised learning even in the absence of active control over the input.
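A minimal sketch of the re-weighting idea described in the abstract, assuming a PyTorch autoencoder. Here, per-sample reconstruction error stands in as a compressibility signal, and a softmax with temperature `beta` turns it into sample weights; both of these choices are illustrative assumptions, not the authors' exact priority function:

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Small autoencoder whose bottleneck acts as a compressed code."""
    def __init__(self, dim_in=784, dim_code=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_code))
        self.decoder = nn.Sequential(
            nn.Linear(dim_code, 256), nn.ReLU(), nn.Linear(256, dim_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def prioritized_loss(model, x, beta=5.0):
    """Priority-weighted reconstruction loss.

    Samples that are easier to compress (lower per-sample
    reconstruction error) receive higher weight, so training
    effort concentrates on compressible inputs. The softmax
    form and the `beta` temperature are assumptions for this
    sketch, not the paper's exact formulation.
    """
    recon = model(x)
    per_sample_err = ((recon - x) ** 2).mean(dim=1)  # shape: (batch,)
    # Priority: softmax over negative error favors compressible samples.
    weights = torch.softmax(-beta * per_sample_err.detach(), dim=0)
    return (weights * per_sample_err).sum()

# Usage: one training step on a random batch.
model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)
loss = prioritized_loss(model, x)
opt.zero_grad()
loss.backward()
opt.step()
```

Detaching the per-sample errors inside the weight computation keeps the priority function out of the gradient path, so it only redistributes learning effort across the batch rather than being optimized itself.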
Submission Number: 54