Abstract: Advances in supervised learning approaches to object
recognition flourished in part because of the availability
of high-quality datasets and associated benchmarks. However, these benchmarks—such as ILSVRC—are relatively
task-specific, focusing predominantly on predicting class labels. We introduce a publicly available dataset that embodies the task-general capabilities of human perception
and reasoning. The Human Similarity Judgments extension to ImageNet (ImageNet-HSJ) is composed of a large
set of human similarity judgments that supplements the existing ILSVRC validation set. The new dataset supports a range of tasks and performance metrics, including the evaluation of unsupervised algorithms. We demonstrate two methods of assessment: using the similarity judgments directly
and using a psychological embedding trained on the similarity judgments. This embedding space contains an order
of magnitude more points (i.e., images) than previous efforts based on human judgments. We were able to scale to the full 50,000-image ILSVRC validation set through a selective sampling process that used variational Bayesian inference and model ensembles to sample aspects of the embedding space that were most uncertain.
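(An editorial sketch, not the paper's exact procedure: the array names and the variance-based criterion below are assumptions.) One way to realize this kind of selective sampling is to score each image by how much an ensemble of posterior samples disagrees about its embedding location, and to queue the most uncertain images for new judgments:

    import numpy as np

    def select_uncertain_items(posterior_samples, n_select):
        # posterior_samples: (n_samples, n_items, n_dims) coordinates drawn
        # from a variational posterior or an ensemble of fitted embeddings.
        # Per-item uncertainty: total variance of its location across samples.
        per_item_var = posterior_samples.var(axis=0).sum(axis=-1)
        # The highest-variance items are the most informative to probe next.
        return np.argsort(per_item_var)[::-1][:n_select]

    # Illustrative usage: 10 posterior samples of a 50,000-item, 8-D embedding.
    rng = np.random.default_rng(0)
    samples = rng.normal(size=(10, 50_000, 8))
    next_queries = select_uncertain_items(samples, n_select=64)

Ranking by posterior variance concentrates costly human trials on the images whose locations the current model constrains least.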
To demonstrate the utility of ImageNet-HSJ, we used the similarity ratings and the embedding space to evaluate how well several popular models conform to human similarity judgments. One finding is that more complex models that perform better on task-specific benchmarks do not conform better to human semantic judgments.
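As a hedged sketch of both assessment routes (the helper names and the exact metrics are editorial assumptions, not the released evaluation code), a model's features can be scored by how often their distances reproduce individual human triplet choices, and by how well their pairwise distances rank-correlate with those of the psychological embedding:

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def triplet_agreement(feats, triplets):
        # triplets: (n, 3) int array of (anchor, chosen, not_chosen) image ids,
        # where humans judged the anchor as more similar to the chosen image.
        a, c, r = triplets[:, 0], triplets[:, 1], triplets[:, 2]
        d_chosen = np.linalg.norm(feats[a] - feats[c], axis=1)
        d_other = np.linalg.norm(feats[a] - feats[r], axis=1)
        # Fraction of human choices the model's distances reproduce.
        return float(np.mean(d_chosen < d_other))

    def second_order_agreement(feats, psych_embedding):
        # Rank correlation between the model's pairwise distances and the
        # psychological embedding's pairwise distances over the same images.
        rho, _ = spearmanr(pdist(feats), pdist(psych_embedding))
        return rho

Under scores of this kind, the finding above amounts to the observation that higher ILSVRC accuracy need not translate into higher agreement with human judgments.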
In addition to the human similarity judgments, pre-trained psychological embeddings and code for inferring variational embeddings are made publicly available. ImageNet-HSJ supports the appraisal of internal representations and the development of more human-like models.
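The released code performs variational inference; as a loose stand-in for the underlying ordinal-embedding idea (a maximum-margin point estimate rather than a variational posterior, with invented names throughout), an embedding can be fit so that each judged anchor lands closer to the image people chose than to the one they passed over:

    import numpy as np

    def infer_embedding(triplets, n_items, n_dims=2, lr=0.05, epochs=200, margin=0.5):
        # triplets: (n, 3) int array of (anchor, chosen, not_chosen) image ids.
        rng = np.random.default_rng(0)
        z = rng.normal(scale=0.1, size=(n_items, n_dims))
        a, c, r = triplets[:, 0], triplets[:, 1], triplets[:, 2]
        for _ in range(epochs):
            dc, dr = z[a] - z[c], z[a] - z[r]
            # Hinge loss: penalize triplets where the anchor is not closer
            # to the chosen image by at least `margin`.
            active = (np.sum(dc**2, 1) - np.sum(dr**2, 1) + margin > 0)[:, None]
            grad = np.zeros_like(z)
            np.add.at(grad, a, 2 * (z[r] - z[c]) * active)
            np.add.at(grad, c, -2 * dc * active)
            np.add.at(grad, r, 2 * dr * active)
            z -= lr * grad / len(triplets)
        return z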