High-Level Feature Guided Decoding for Semantic Segmentation

Published: 01 Jan 2024, Last Modified: 15 Nov 2024. IEEE Trans. Circuits Syst. Video Technol. 2024. License: CC BY-SA 4.0
Abstract: Existing pyramid-based upsamplers (e.g., SemanticFPN), although efficient, usually produce less accurate results than dilation-based models when using the same backbone. This is partly because the high-level features become contaminated when they are fused and fine-tuned with noisy low-level features on limited data. To address this issue, we propose to use powerful pre-trained high-level features as guidance (HFG) so that the upsampler can produce robust results. Specifically, only the high-level features from the backbone are used to train the class tokens, which are then reused by the upsampler for classification, guiding the upsampler features toward the more discriminative backbone features. One crucial design of the HFG is to protect the high-level features from being contaminated by using proper stop-gradient operations, so that the backbone does not update according to the noisy gradient from the upsampler. To push the upper limit of HFG, we introduce a context augmentation encoder (CAE) that operates efficiently and effectively on the low-resolution high-level feature, resulting in improved representation and thus better guidance. We name our complete solution the High-Level Feature Guided Decoder (HFGD). We evaluate the proposed HFGD on three benchmarks: PASCAL Context, COCO-Stuff 164k, and Cityscapes. HFGD achieves state-of-the-art results among methods that do not use extra training data, demonstrating its effectiveness and generalization ability.
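To make the guidance mechanism concrete, below is a minimal PyTorch-style sketch of the HFG idea as described in the abstract: class tokens are trained against the high-level backbone feature and then reused as the classifier for the upsampler output, with stop-gradient (detach) keeping the upsampler's noisy gradient away from the backbone path. All module and parameter names (HFGHead, up_proj, class_tokens) and the exact placement of the detach calls are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class HFGHead(nn.Module):
    """Hypothetical sketch of high-level feature guidance (HFG).

    Class tokens are supervised only through the high-level backbone
    feature; the upsampler branch reuses them for classification but,
    via detach(), cannot push gradients back into the backbone path.
    """

    def __init__(self, high_dim: int, up_dim: int, num_classes: int):
        super().__init__()
        # One learnable token (classifier vector) per class.
        self.class_tokens = nn.Parameter(torch.randn(num_classes, high_dim))
        # Projects upsampler features into the token space (assumed 1x1 conv).
        self.up_proj = nn.Conv2d(up_dim, high_dim, kernel_size=1)

    def forward(self, high_feat: torch.Tensor, up_feat: torch.Tensor):
        # high_feat: (B, C_h, h, w)  low-resolution high-level backbone feature
        # up_feat:   (B, C_u, H, W)  high-resolution upsampler (e.g., FPN) feature

        # 1) Train the class tokens on the high-level feature only
        #    (auxiliary logits supervised by a low-resolution loss).
        aux_logits = torch.einsum(
            "bchw,kc->bkhw", high_feat, self.class_tokens)

        # 2) Reuse the same tokens to classify the upsampler feature, pulling
        #    the upsampler features toward the backbone's space. Detaching the
        #    tokens here is one plausible stop-gradient placement so the
        #    upsampler loss does not disturb the token/backbone path.
        up_logits = torch.einsum(
            "bchw,kc->bkhw", self.up_proj(up_feat), self.class_tokens.detach())
        return aux_logits, up_logits
```

In this sketch, both logit maps would be supervised with segmentation losses at their respective resolutions; the stop-gradient ensures only the high-level path shapes the shared class tokens, which is the guidance behavior the abstract attributes to HFG.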