Cross-Level Feature Relocation: Mitigating Information Loss in Cross-Layer Feature Fusion for Crowd Counting

Published: 05 Sept 2024, Last Modified: 16 Oct 2024 · ACML 2024 Conference Track · CC BY 4.0
Keywords: Computer vision; Crowd counting; Multi-scale features
Verify Author List: I have double-checked the author list and understand that additions and removals will not be allowed after the submission deadline.
TL;DR: An effective method for handling scale variation in crowd counting
Abstract: In crowd counting, significant challenges persist due to scale variation, occlusion, and complex scene interference. Merging feature maps from different levels of the backbone network is an intuitive and efficient way to address these issues. However, existing multi-scale merging algorithms often overlook a critical aspect: feature maps at different levels typically have different resolutions, and traditional interpolation-based fusion methods incur significant information loss, limiting the algorithm's multi-scale perception capability. To address this issue, we propose the Cross-Level Feature Relocation Module (CFRM), which regresses features from different levels into a unified representation space and uses a cross-level attention mechanism to transfer complementary information from low-resolution to high-resolution feature maps, substantially improving the utilization of effective information. Based on CFRM, we introduce the Cross-Level Feature Relocation Network (CFRNet), which exhibits strong multi-scale perception capabilities. Extensive experiments on five datasets and comprehensive ablation studies demonstrate the effectiveness of CFRM.
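
To make the mechanism described in the abstract concrete, the following is a minimal PyTorch sketch of a cross-level attention step: both feature levels are projected into a shared representation space, and the high-resolution map attends to the low-resolution map to pull in complementary information. All class names, layer choices, and dimensions are illustrative assumptions for exposition; this is not the authors' CFRM implementation.

# Illustrative sketch only: a cross-level attention step in the spirit of the
# CFRM described above. Names, layers, and dimensions are assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn


class CrossLevelAttention(nn.Module):
    """Transfers complementary information from a low-resolution feature map
    (keys/values) to a high-resolution feature map (queries)."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # 1x1 convolutions project both levels into a shared (unified) space.
        self.proj_high = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj_low = nn.Conv2d(channels, channels, kernel_size=1)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
        b, c, h, w = high.shape
        # Flatten spatial dimensions into token sequences: (B, N, C).
        q = self.proj_high(high).flatten(2).transpose(1, 2)   # high-res queries
        kv = self.proj_low(low).flatten(2).transpose(1, 2)    # low-res keys/values
        out, _ = self.attn(q, kv, kv)                         # cross-level attention
        # Residual connection preserves the original high-resolution content.
        return high + out.transpose(1, 2).reshape(b, c, h, w)


# Example: fuse a lower-resolution level into a higher-resolution one.
high_res = torch.randn(1, 256, 64, 64)   # higher-resolution feature map
low_res = torch.randn(1, 256, 32, 32)    # lower-resolution feature map
fused = CrossLevelAttention(256)(high_res, low_res)
print(fused.shape)  # torch.Size([1, 256, 64, 64])

Note that, unlike interpolation-based fusion, this attention-style transfer does not require resampling the low-resolution map to the high-resolution grid before merging, which is the information-loss issue the abstract highlights.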
A Signed Permission To Publish Form In Pdf: pdf
Primary Area: Applications (bioinformatics, biomedical informatics, climate science, collaborative filtering, computer vision, healthcare, human activity recognition, information retrieval, natural language processing, social networks, etc.)
Paper Checklist Guidelines: I certify that all co-authors of this work have read and commit to adhering to the guidelines in Call for Papers.
Student Author: Yes
Submission Number: 381