Abstract: Most existing approaches to visual search reranking focus on mining information within the initial search results. However, because visual search performance is typically unsatisfactory, the initial ranked list alone cannot provide sufficient cues for reranking. This paper presents CrowdReranking, a new method for visual search reranking that mines relevant visual patterns from the image search results of multiple search engines available on the Internet. Since different search engines may index different data sources and employ different ranking methods, it is reasonable to assume that their results differ while still sharing common visual patterns relevant to a given query. We first construct a set of visual words based on the local image patches collected from multiple image search engines. We then explicitly detect two kinds of visual patterns, i.e., salient and concurrent patterns, among the visual words. Theoretically, we formalize reranking as an optimization problem on the basis of the mined visual patterns and propose a closed-form solution. Empirically, we conduct extensive experiments on several real-world search engines and one benchmark dataset, and show that the proposed CrowdReranking outperforms state-of-the-art methods.
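To make the pipeline sketched in the abstract concrete, here is a minimal, illustrative Python sketch of the overall idea: build a visual-word vocabulary from local patches gathered from several engines' results, mine a salient pattern as the visual words that recur across those results, and rerank the initial list by trading off the initial scores against consistency with the mined pattern. The function names, the plain k-means vocabulary step, and the simple linear blend (standing in for the paper's closed-form optimization) are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def build_vocabulary(patch_descriptors, k=100, iters=20, seed=0):
    """Quantize local patch descriptors into k visual words via plain k-means."""
    rng = np.random.default_rng(seed)
    centers = patch_descriptors[rng.choice(len(patch_descriptors), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each descriptor to its nearest center, then recompute centers
        d = np.linalg.norm(patch_descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = patch_descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """L1-normalized bag-of-visual-words histogram for one image."""
    d = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centers)).astype(float)
    return hist / max(hist.sum(), 1.0)

def salient_pattern(engine_histograms):
    """Salient pattern: average visual-word frequency over all engines' results."""
    stacked = np.vstack([h for hists in engine_histograms for h in hists])
    p = stacked.mean(axis=0)
    return p / max(p.sum(), 1.0)

def crowd_rerank(initial_scores, candidate_histograms, pattern, alpha=0.5):
    """Rerank by blending initial scores with pattern consistency (higher is better)."""
    consistency = candidate_histograms @ pattern          # dot-product similarity to pattern
    consistency /= max(consistency.max(), 1e-12)          # scale to [0, 1]
    scores = (1 - alpha) * np.asarray(initial_scores) + alpha * consistency
    return np.argsort(-scores)                            # indices of the new ranking
```

In this sketch, `alpha` controls how strongly the crowd-mined pattern overrides the original ranking; the paper's actual objective additionally models concurrent patterns, which are omitted here for brevity.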