Part-Aware Fine-Grained Object Categorization Using Weakly Supervised Part Detection Network

IEEE Trans. Multim., 2020 (modified: 27 Feb 2022)
Abstract: Fine-grained object categorization aims to distinguish objects of subordinate categories that belong to the same entry-level object category. It is a rapidly developing subfield of multimedia content analysis. The task is challenging because (1) training images with ground-truth labels are difficult to obtain, and (2) variations among different subordinate categories are subtle. It is well established that the features characterizing different subordinate categories are located on local parts of object instances. However, manually annotating object parts requires expertise and is difficult to generalize to new fine-grained categorization tasks. In this work, we propose a Weakly Supervised Part Detection Network (PartNet) that detects discriminative local parts for use in fine-grained categorization. A vanilla PartNet builds two parallel streams of upper network layers on top of a base subnetwork, which respectively compute classification probabilities (over subordinate categories) and detection probabilities (over a specified number of discriminative part detectors) for local regions of interest (RoIs). The image-level prediction is obtained by aggregating element-wise products of these region-level probabilities, so that diverse part detectors can be learned in an end-to-end fashion under image-level supervision. To generate a diverse set of RoIs as inputs to PartNet, we propose a simple Discretized Part Proposals (DPP) module that directly proposes candidates of discriminative local parts, without bridging via object-level proposals. Experiments on the benchmark datasets CUB-200-2011, Oxford Flower 102, and Oxford-IIIT Pet show the efficacy of our proposed method for both discriminative part detection and fine-grained categorization. In particular, we achieve new state-of-the-art performance on the CUB-200-2011 and Oxford-IIIT Pet datasets when ground-truth part annotations are not available.
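As a rough illustration of the aggregation described in the abstract, the sketch below couples per-RoI classification probabilities with per-detector detection probabilities via an element-wise product and sums them into an image-level prediction. The module name TwoStreamHead, the layer sizes, and the choice of softmax axes are assumptions made for illustration; this is not the authors' released implementation.

```python
# Minimal sketch of the two-stream aggregation over RoI features.
# All names and shapes are hypothetical; only the element-wise-product
# aggregation follows the description in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamHead(nn.Module):
    """Classification and detection streams over pooled RoI features."""
    def __init__(self, feat_dim, num_classes, num_parts):
        super().__init__()
        self.cls_fc = nn.Linear(feat_dim, num_classes)  # scores over subordinate categories
        self.det_fc = nn.Linear(feat_dim, num_parts)    # scores over part detectors

    def forward(self, roi_feats):
        # roi_feats: (num_rois, feat_dim) pooled features of part proposals
        cls_prob = F.softmax(self.cls_fc(roi_feats), dim=1)  # per-RoI class probabilities
        det_prob = F.softmax(self.det_fc(roi_feats), dim=0)  # detectors compete across RoIs
        # Element-wise product couples "which class" with "which RoI a detector fires on";
        # summing over RoIs and detectors yields an image-level prediction.
        region_scores = cls_prob.unsqueeze(2) * det_prob.unsqueeze(1)  # (num_rois, C, K)
        image_pred = region_scores.sum(dim=(0, 2))                     # (C,)
        return image_pred, det_prob
```

In this reading, the softmax over RoIs in the detection stream makes each part detector select the regions it responds to most strongly, which is what allows diverse part detectors to emerge from image-level labels alone.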