Abstract: Previous work has shown that the feature maps of deep convolutional neural networks (CNNs)
can be interpreted as feature representations of particular image regions. Features aggregated from
these feature maps have been exploited for image retrieval and have achieved state-of-the-art performance in
recent years. The key to the success of such methods is the feature representation. However, the different
factors that affect the effectiveness of these features have not been explored thoroughly, and there has
been little discussion of their best combination.
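To make the aggregation idea concrete, here is a minimal sketch (not the paper's exact method) of turning a conv feature map of shape H x W x C into a single C-dimensional image descriptor by sum-pooling over spatial positions and L2-normalising; the random feature map stands in for real network activations:

```python
import math
import random

def aggregate_feature_map(fmap):
    """Sum-pool an H x W x C feature map over spatial positions,
    then L2-normalise the resulting C-dimensional descriptor."""
    h, w, c = len(fmap), len(fmap[0]), len(fmap[0][0])
    desc = [0.0] * c
    for i in range(h):
        for j in range(w):
            for k in range(c):
                desc[k] += fmap[i][j][k]
    norm = math.sqrt(sum(x * x for x in desc)) or 1.0
    return [x / norm for x in desc]

# Toy 4 x 4 x 8 feature map in place of real CNN activations.
random.seed(0)
fmap = [[[random.random() for _ in range(8)] for _ in range(4)]
        for _ in range(4)]
descriptor = aggregate_feature_map(fmap)
print(len(descriptor))  # 8 (one value per channel)
```

The resulting unit-norm vector can be compared across images with a dot product, which is what makes such aggregated features usable for retrieval.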
The main contribution of this paper is a thorough evaluation of the various factors that affect the
discriminative ability of features extracted from CNNs. Based on the evaluation results, we identify
the best choice for each factor and propose a new multi-scale image feature representation that
encodes an image effectively. Finally, we show that the proposed method generalises well and outperforms
state-of-the-art methods on four typical datasets used for visual instance retrieval.
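One common way to build a multi-scale representation of this kind is to pool the feature map over regions at several scales and combine the region vectors. The sketch below illustrates that idea under an assumed region layout (the whole map plus its four quadrants); it is not the paper's exact scheme:

```python
import math

def l2_normalise(v):
    """Scale a vector to unit L2 norm (no-op for the zero vector)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def pool_region(fmap, r0, r1, c0, c1):
    """Sum-pool a spatial sub-region of an H x W x C map, then normalise."""
    c = len(fmap[0][0])
    v = [0.0] * c
    for i in range(r0, r1):
        for j in range(c0, c1):
            for k in range(c):
                v[k] += fmap[i][j][k]
    return l2_normalise(v)

def multi_scale_descriptor(fmap):
    """Combine region descriptors from two scales into one vector."""
    h, w = len(fmap), len(fmap[0])
    regions = [
        (0, h, 0, w),                                    # scale 1: whole map
        (0, h // 2, 0, w // 2), (0, h // 2, w // 2, w),  # scale 2: quadrants
        (h // 2, h, 0, w // 2), (h // 2, h, w // 2, w),
    ]
    c = len(fmap[0][0])
    desc = [0.0] * c
    for r in regions:
        for k, x in enumerate(pool_region(fmap, *r)):
            desc[k] += x
    return l2_normalise(desc)

# Toy 4 x 4 x 3 feature map standing in for CNN activations.
fmap = [[[float(i + j + k) for k in range(3)] for j in range(4)]
        for i in range(4)]
print(len(multi_scale_descriptor(fmap)))  # 3
```

Summing normalised region vectors lets each scale contribute comparably to the final descriptor, which is the usual motivation for per-region normalisation in such schemes.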
Conflicts: ia.ac.cn
Keywords: Computer vision, Deep learning