Abstract: Although detection Transformer (DETR)-like methods have improved end-to-end detection, they process entire flattened feature maps uniformly, causing queries to attend to irrelevant regions. The resulting redundant attention patterns incur computational burdens and inefficiencies when detecting tiny objects in remote sensing imagery across dense and sparse settings. To address these limitations, we present a lightweight Transformer-based encoder-decoder architecture called the dynamic adaptive region Transformer (DART). Specifically, the density adaptive Transformer (DAT) encoder employs an adaptive-region attention (ARA) mechanism that dynamically generates content-aware spatial regions based on feature density and semantic relevance. This strategy concentrates computational resources on semantically rich areas while minimizing attention to irrelevant background regions. The region-aware decoder (RAD) incorporates a masked region-aware cross-attention (MRA) mechanism in which queries interact exclusively with the adaptive masked regions generated by the encoder, thereby reducing redundant focus on overlapping or irrelevant areas. In addition, a query diversity loss is introduced to penalize overlapping attention patterns among queries, encouraging each query to focus on distinct and complementary regions. By adapting to data density and directing queries to essential areas of the image, DART enhances feature extraction and object localization for objects of various sizes in both dense and sparse settings. Experimental results demonstrate that DART achieves state-of-the-art performance on the AI-TOD, DOTA-v2.0, and LEVIR-Ship benchmarks and exhibits strong generalization on DIOR, while using only 13 million parameters and a computational cost of 68 GFLOPs.
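The abstract does not give the exact form of the query diversity loss, but a common way to penalize overlapping attention among queries is to average the pairwise cosine similarity of their attention maps. The sketch below is an illustrative NumPy implementation under that assumption; the function name `query_diversity_loss` and the cosine-similarity formulation are hypothetical, not taken from the paper.

```python
import numpy as np

def query_diversity_loss(attn, eps=1e-8):
    """Illustrative diversity penalty (assumed form, not the paper's exact loss):
    mean pairwise cosine similarity between per-query attention maps.

    attn: array of shape (num_queries, num_locations) with nonnegative
          attention weights; higher loss means queries overlap more.
    """
    # L2-normalize each query's attention map so dot products are cosines
    norm = attn / (np.linalg.norm(attn, axis=1, keepdims=True) + eps)
    sim = norm @ norm.T  # (num_queries, num_queries) pairwise cosine similarities
    n = attn.shape[0]
    # Zero the diagonal (self-similarity) and average the ordered off-diagonal pairs
    off_diag = sim - np.eye(n) * np.diag(sim)
    return off_diag.sum() / (n * (n - 1))
```

Queries with disjoint (orthogonal) attention maps yield a loss of 0, while identical maps yield 1, so adding this term to the detection objective pushes queries toward distinct, complementary regions.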
External IDs: doi:10.1109/TGRS.2025.3582173