Abstract: In this paper, we propose a computational model of visual representativeness by integrating cognitive theories of the representativeness heuristic with computer vision and machine learning techniques. Unlike previous models that build their representativeness measure on the visible data alone, our model takes the initial inputs as an explicit positive reference and extends the measure by exploring implicit negatives. Given a group of images that contains an obvious visual concept, we create a customized image ontology consisting of both positive and negative instances by mining the most related and most confusable neighbors of the positive concept in ontological semantic knowledge bases. The representativeness of a new item is then determined by its likelihoods under both the positive and negative references. To ensure effective probability inference as well as cognitive plausibility, we discover potential prototypes and treat them as an intermediate representation of semantic concepts. In the experiments, we evaluate the performance of representativeness models based on both human judgments and user-click logs of a commercial image search engine. Experimental results on both ImageNet and image sets of general concepts demonstrate the superior performance of our model against state-of-the-art methods.
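The paper's details are not given in the abstract, but the core scoring idea, a new item's representativeness determined by its likelihoods under positive and negative references via discovered prototypes, can be sketched as follows. This is a minimal illustration under assumed choices (kernel-density likelihoods over prototype feature vectors and a simple likelihood-ratio contrast), not the authors' actual model:

```python
import numpy as np

def likelihood(x, prototypes, bandwidth=1.0):
    """Kernel-density likelihood of feature vector x under a set of prototypes.

    Prototypes act as the intermediate representation of a semantic concept;
    the Gaussian kernel and shared bandwidth are illustrative assumptions.
    """
    dists = np.linalg.norm(prototypes - x, axis=1)
    return float(np.mean(np.exp(-(dists ** 2) / (2 * bandwidth ** 2))))

def representativeness(x, pos_prototypes, neg_prototypes):
    """Score x by contrasting its likelihood under the positive reference
    against its likelihood under the mined negative (confusable) reference."""
    lp = likelihood(x, pos_prototypes)
    ln = likelihood(x, neg_prototypes)
    return lp / (lp + ln + 1e-12)  # in (0, 1); higher = more representative

# Hypothetical 2-D features: positives cluster near the origin,
# confusable negatives cluster near (3, 3).
pos = np.array([[0.0, 0.0], [0.2, 0.1], [-0.1, 0.2]])
neg = np.array([[3.0, 3.0], [3.1, 2.9]])

score_typical = representativeness(np.array([0.05, 0.05]), pos, neg)
score_atypical = representativeness(np.array([2.9, 3.0]), pos, neg)
```

A point near the positive prototypes receives a score close to 1, while a point near the confusable negatives scores close to 0, capturing the intuition that representativeness is relative to both what the concept is and what it is easily confused with.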