Track: Full track
Keywords: Unsupervised learning, Human preferences, Perceptual fluency, Compression, Efficient coding
TL;DR: We weight the loss in passive representation learning to prioritize compression, yielding human-like perceptual preferences.
Abstract: We present prioritized representation learning (PRL), a method to enhance unsupervised representation learning by drawing inspiration from active learning and intrinsic motivations. PRL re-weights training samples based on an intrinsic priority function embodying preferences for certain inputs. We show how common human perceptual biases across different sensory modalities emerge through a priority function promoting compression, and we demonstrate the effects of biased early exposure on individual preferences. Our results reveal that PRL can mimic the outcomes of active unsupervised learning even in the absence of active control over the input.
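A minimal sketch of how such a priority-weighted loss might look, assuming a PyTorch autoencoder as the representation learner. The abstract states only that samples are re-weighted by an intrinsic priority function promoting compression; the concrete priority used here (favoring easily compressed, low-reconstruction-error inputs) and all function and parameter names are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of prioritized representation learning (PRL): a per-sample
# priority re-weights the reconstruction loss during passive training.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, dim_in=32, dim_z=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_in, 16), nn.ReLU(), nn.Linear(16, dim_z))
        self.dec = nn.Sequential(nn.Linear(dim_z, 16), nn.ReLU(), nn.Linear(16, dim_in))

    def forward(self, x):
        return self.dec(self.enc(x))

def priority(per_sample_err, beta=5.0):
    # Hypothetical compression-promoting priority: a softmax over negative
    # reconstruction error, so more compressible (low-error) inputs
    # receive a larger training weight. The true form is not given here.
    return torch.softmax(-beta * per_sample_err, dim=0)

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 32)  # stand-in for a passively observed input stream

for step in range(200):
    recon = model(x)
    err = ((recon - x) ** 2).mean(dim=1)  # per-sample reconstruction error
    with torch.no_grad():
        w = priority(err)                 # intrinsic priority weights (no gradient)
    loss = (w * err).sum()                # prioritized (re-weighted) loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Detaching the weights from the gradient keeps the priority acting purely as a sample re-weighting, mimicking active input selection without giving the learner actual control over the input.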
Submission Number: 54