Webly-supervised learning for salient object detection

Pattern Recognit., 2020 (modified: 13 May 2021)
Highlights
• We propose a novel webly-supervised learning method for salient object detection that requires no pixel-wise annotations.
• We introduce a novel quality evaluation method that helps pick out images with high-quality masks for training.
• We present a self-training approach that boosts the performance of our network by selecting more hard web images for training.
• Our method outperforms existing unsupervised methods, weakly supervised methods, and even fully supervised approaches.

Abstract
End-to-end training of a deep CNN-based model for salient object detection usually requires a huge number of training samples with pixel-level annotations, which are costly and time-consuming to obtain. In this paper, we propose an approach that utilizes large amounts of web data to learn a deep salient object detection model. With thousands of images collected from the Web, we first employ several bottom-up saliency detection techniques to generate salient object masks for all images, and then use a novel quality evaluation method to pick out a subset of images with reliable masks for training. After that, we develop a self-training approach that boosts the performance of our initial network by iterating between network training and training-set updating. Importantly, unlike existing webly-supervised or weakly supervised methods, our approach automatically selects reliable images for network training without requiring any human intervention (e.g., dividing images into different difficulty levels). Results of extensive experiments on several widely used benchmarks demonstrate that our method achieves state-of-the-art performance: it significantly outperforms existing unsupervised and weakly supervised salient object detection methods, and achieves competitive or even better performance than fully supervised approaches.
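The select-then-self-train pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the paper's quality evaluation method is learned from data, whereas here mean pairwise IoU among the bottom-up candidate masks serves as a stand-in quality proxy, the CNN training step is stubbed out, and all names (`mask_quality`, `select_reliable`, `self_train`) and threshold values are hypothetical.

```python
def pairwise_iou(a, b):
    """IoU between two binary masks represented as sets of pixel indices."""
    union = len(a | b)
    return len(a & b) / union if union else 1.0

def mask_quality(masks):
    """Quality proxy: mean pairwise IoU among candidate masks produced by
    several bottom-up saliency methods. (The paper instead learns a dedicated
    quality evaluation model; agreement is just a cheap stand-in here.)"""
    pairs = [(i, j) for i in range(len(masks)) for j in range(i + 1, len(masks))]
    return sum(pairwise_iou(masks[i], masks[j]) for i, j in pairs) / len(pairs)

def select_reliable(dataset, threshold):
    """Keep only web images whose candidate masks agree strongly enough."""
    return [name for name, masks in dataset.items()
            if mask_quality(masks) >= threshold]

def self_train(dataset, rounds=3, start=0.65, step=0.1):
    """Self-training loop: alternate (stubbed) network training with
    training-set updates, gradually admitting harder web images by
    relaxing the selection threshold each round."""
    train_set = []
    for r in range(rounds):
        train_set = select_reliable(dataset, max(start - r * step, 0.0))
        # train_network(train_set)  # actual CNN training omitted in this sketch
    return train_set

# Toy web images: pixel-index sets standing in for binary masks.
web_images = {
    "consistent": [{1, 2, 3}, {1, 2, 3}, {1, 2, 4}],   # quality ~ 0.67
    "contradictory": [{1}, {2}, {3}],                  # quality = 0.0
    "borderline": [{1, 2}, {1, 2}, {1, 3}],            # quality ~ 0.56
}
print(select_reliable(web_images, 0.6))  # ['consistent']
print(self_train(web_images))            # ['consistent', 'borderline']
```

By the final round the relaxed threshold admits the harder "borderline" image while still rejecting the image whose candidate masks contradict each other, mirroring how the paper's self-training grows the training set without human-defined difficulty levels.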