Lightweight transformer-driven multi-scale trapezoidal attention network for saliency detection

Published: 01 Jan 2025, Last Modified: 24 Jul 2025
Eng. Appl. Artif. Intell. 2025
License: CC BY-SA 4.0
Abstract: Highlights
• The proposed network balances efficiency and performance via a lightweight design.
• CFRBs with dilated convolutions enhance multi-scale contextual representations.
• The introduced TAM effectively refines features by integrating ASCA and CCG.
• FAMHA blocks capture long-range dependencies and global cues.
• TRSNet outperforms SOTA methods across six saliency detection benchmarks.