- TL;DR: We investigated the changes in visual representations learnt by CNNs when trained with different linguistic labels
- Abstract: We investigated the changes in visual representations learnt by CNNs when trained with different linguistic labels (e.g., basic-level labels only, superordinate-level labels only, or both simultaneously) and how these representations compare to human behavior in an odd-one-out task, in which participants select which of three images is most different. We compared CNNs with identical architectures and inputs, differing only in the labels used to supervise training. The results showed that, in the absence of labels, the models learn very little of the categorical structure that is often assumed to be present in the input. Training with superordinate labels (vehicle, tool, etc.) was most effective in allowing the models to match human categorization, implying that the human representations used in odd-one-out tasks are strongly modulated by semantic information not obviously present in the visual input.
- Code: https://github.com/ahnchive/19vscl.git
- Keywords: category learning, visual representation, linguistic labels, human behavior prediction
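As an illustration of the odd-one-out comparison described in the abstract, the choice can be scored from a model's embedding vectors: the item with the lowest summed similarity to the other two is the odd one out. This is a minimal sketch under assumed cosine-similarity scoring, not the paper's actual method; the linked repository contains the real implementation.

```python
import numpy as np

def odd_one_out(embeddings):
    """Pick which of three items is most different.

    Each item's score is its summed cosine similarity to the other
    two; the odd one out is the item with the lowest total similarity.
    """
    e = np.asarray(embeddings, dtype=float)
    e = e / np.linalg.norm(e, axis=1, keepdims=True)  # unit-normalize rows
    sim = e @ e.T                                     # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)                        # ignore self-similarity
    return int(np.argmin(sim.sum(axis=1)))            # least similar to the rest

# Toy example: two similar vectors and one dissimilar one.
triplet = [[1.0, 0.1], [0.9, 0.2], [0.0, 1.0]]
print(odd_one_out(triplet))  # → 2
```

Comparing such model choices against human choices on the same triplets gives a behavioral agreement score for each training regime (unsupervised, basic-level, superordinate, or both).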