Effects of Linguistic Labels on Learned Visual Representations in Convolutional Neural Networks: Labels matter!

25 Sept 2019 (modified: 05 May 2023) — ICLR 2020 Conference Blind Submission
TL;DR: We investigated the changes in visual representations learnt by CNNs when using different linguistic labels
Abstract: We investigated how the visual representations learned by CNNs change with different linguistic labels (e.g., trained with basic-level labels only, superordinate-level labels only, or both simultaneously), and how these representations compare to human behavior in a task where participants select which of three images is most different. We compared CNNs with identical architecture and input that differed only in the labels used to supervise training. The results showed that, in the absence of labels, the models learned very little of the categorical structure that is often assumed to be present in the input. Training with superordinate labels (vehicle, tool, etc.) was most effective in aligning model representations with human categorization, implying that the human representations used in odd-one-out tasks are strongly modulated by semantic information not obviously present in the visual input.
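A minimal sketch of an odd-one-out decision from learned embeddings, assuming (as is common in such comparisons, though not confirmed as the paper's exact procedure) that the model's choice is the item with the lowest total cosine similarity to the other two:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def odd_one_out(embeddings):
    """Return the index of the most different item in a triplet:
    the embedding whose summed cosine similarity to the other
    two is lowest. (Hypothetical decision rule for illustration.)"""
    n = len(embeddings)
    sims = [
        sum(cosine(embeddings[i], embeddings[j]) for j in range(n) if j != i)
        for i in range(n)
    ]
    return int(np.argmin(sims))

# Toy example: two nearby vectors and one distant vector.
triplet = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([0.0, 1.0])]
print(odd_one_out(triplet))  # → 2
```

With this rule, the same image triplets can be scored against embeddings from the differently supervised models and compared with human choices.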
Code: https://github.com/ahnchive/19vscl.git
Keywords: category learning, visual representation, linguistic labels, human behavior prediction