Learn to abstract via concept graph for weakly-supervised few-shot learning

Pattern Recognit. 2021 (modified: 12 Apr 2022)
Highlights
• To the best of our knowledge, we are the first to introduce the concept graph and explore the concept hierarchy for addressing the weakly-supervised few-shot learning (WSFSL) problem.
• We propose a novel concept graph-based meta-learning framework consisting of a multi-level conceptual abstraction-based regularization and a meta concept inference network.
• Extensive experiments on two realistic datasets, WS-ImageNet-Pure and WS-ImageNet-Mix, demonstrate the effectiveness of the proposed framework.

Abstract: In recent years, many meta-learning methods have been proposed for few-shot learning and have shown superior performance. However, existing methods rarely exploit explicit prior knowledge (e.g., a concept graph) or weakly-supervised information, both of which are usually free or cheap to collect. In this paper, we introduce a concept graph for weakly-supervised few-shot learning and propose a novel meta-learning framework, MetaConcept. Our key idea is to learn a universal meta-learner that can infer a classifier at any level of abstraction, so as to boost the classification performance of meta-learning on novel classes. Specifically, we first propose a novel regularization with multi-level conceptual abstraction, which trains the universal meta-learner to infer not only an entity classifier but also concept classifiers at different levels of the concept graph (i.e., learning to abstract). We then propose a meta concept inference network as the universal meta-learner for the base learner, which quickly adapts to a novel task by jointly inferring from the abstract concepts and a few annotated samples. We conduct extensive experiments on two weakly-supervised few-shot learning benchmarks, WS-ImageNet-Pure and WS-ImageNet-Mix.
Our experimental results show that (1) the proposed MetaConcept outperforms state-of-the-art methods, improving classification accuracy by 2% to 6%; and (2) MetaConcept achieves good performance even when trained only on weakly-labeled datasets.
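The multi-level conceptual abstraction-based regularization described above can be illustrated with a minimal sketch: the training loss combines an entity-level classification term with concept-level terms at each layer of the concept graph. All names, shapes, and the weighting scheme below are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def softmax_xent(logits, label):
    """Cross-entropy of one example's logits against an integer label."""
    z = logits - logits.max()                       # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def multilevel_loss(entity_logits, entity_label,
                    concept_logits_per_level, concept_graph, lam=0.5):
    """Entity loss plus a regularizer: also classify the sample's
    ancestor concept at every level of the (hypothetical) concept graph."""
    loss = softmax_xent(entity_logits, entity_label)
    for level, logits in enumerate(concept_logits_per_level):
        ancestor = concept_graph[entity_label][level]  # concept id at this level
        loss += lam * softmax_xent(logits, ancestor)
    return loss

# Toy setup: 4 entity classes, 2 concept levels with 2 concepts each.
graph = {0: [0, 0], 1: [0, 1], 2: [1, 0], 3: [1, 1]}
entity_logits = np.array([0.1, 2.0, 0.1, 0.1])
concept_logits = [np.array([1.5, 0.2]), np.array([0.2, 1.5])]
print(multilevel_loss(entity_logits, 1, concept_logits, graph))
```

Setting `lam=0` recovers plain entity-level training; increasing it pushes the shared meta-learner to also separate coarser concepts, which is the "learn to abstract" idea in spirit.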