Scene image representation by foreground, background and hybrid features

Expert Syst. Appl. 2021 (modified: 02 Feb 2022)
Abstract:

Highlights
• We identify foreground, background, and hybrid deep features for scene images.
• We aggregate all three types of deep features to represent scene images.
• We evaluate the proposed method on two commonly used benchmark datasets.

Abstract
Previous deep-learning methods for representing scene images primarily consider either foreground or background information as the discriminating clue for the classification task. However, scene images also require additional (hybrid) information to cope with the inter-class similarity and intra-class variation problems. In this paper, we propose to use hybrid features in addition to foreground and background features to represent scene images, on the premise that these three types of information jointly yield a more accurate representation. To this end, we adopt three VGG-16 architectures pre-trained on the ImageNet, Places, and Hybrid (both ImageNet and Places) datasets to extract foreground, background, and hybrid information, respectively. The three types of deep features are then aggregated to form our final representation of scene images. Extensive experiments on two large benchmark scene datasets (MIT-67 and SUN-397) show that our method achieves state-of-the-art classification performance.
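The following is a minimal sketch of the feature-aggregation idea described in the abstract, not the authors' exact pipeline. It assumes PyTorch/torchvision: the foreground backbone uses torchvision's ImageNet-pretrained VGG-16, while the Places and Hybrid backbones are loaded from hypothetical checkpoint files (vgg16_places365.pth, vgg16_hybrid1365.pth). The concatenation-based aggregation at the end is one plausible choice; the paper's actual aggregation scheme may differ.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image


def vgg16_feature_extractor(state_dict_path=None):
    """Return a VGG-16 truncated before the final classification layer (4096-d fc7 output)."""
    net = models.vgg16(
        weights=models.VGG16_Weights.IMAGENET1K_V1 if state_dict_path is None else None
    )
    if state_dict_path is not None:
        # Hypothetical checkpoint assumed to be converted to torchvision's layout.
        net.load_state_dict(torch.load(state_dict_path), strict=False)
    # Drop the last classifier layer so the network outputs penultimate features.
    net.classifier = torch.nn.Sequential(*list(net.classifier.children())[:-1])
    net.eval()
    return net


preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Foreground (ImageNet), background (Places), and hybrid (ImageNet+Places) backbones.
foreground_net = vgg16_feature_extractor()                        # ImageNet weights
background_net = vgg16_feature_extractor("vgg16_places365.pth")   # assumed checkpoint
hybrid_net = vgg16_feature_extractor("vgg16_hybrid1365.pth")      # assumed checkpoint


def scene_representation(image_path):
    """Aggregate the three 4096-d features into a single scene descriptor."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = [net(x).squeeze(0) for net in (foreground_net, background_net, hybrid_net)]
    # Simple concatenation as one possible aggregation: 3 x 4096 = 12288-d vector.
    return torch.cat(feats)
```

The resulting descriptor could then be fed to any standard classifier (e.g., a linear SVM or a small fully connected layer) for scene classification on datasets such as MIT-67 or SUN-397.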