Abstract: Existing visual saliency prediction methods focus mainly on single-modal visual input, ignoring the significant influence of text on visual saliency. To explore more comprehensively how text shapes human attention in images, we propose a text-guided diffusion saliency prediction model, named TDiffSal. Recent studies have shown that stable diffusion models perform well in unifying diverse tasks owing to their inherent generalization ability. Inspired by this, TDiffSal formulates visual-text saliency prediction as a conditional generative task, in which the saliency map is generated with the input image and text serving as conditions. In addition, we introduce a multi-head fusion module that integrates text and image features to guide the denoising process and progressively refine the generated saliency map so that it remains semantically relevant to the text. We further employ an efficient pre-training strategy to enhance the robustness and generalization of the model. Extensive experiments on benchmark datasets demonstrate that TDiffSal outperforms state-of-the-art methods.
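To make the fusion idea concrete, the sketch below shows one plausible form of a multi-head fusion block of the kind the abstract describes, in which image (denoising) features attend to text features via cross-attention so the predicted saliency map stays semantically aligned with the text. This is a minimal illustration, not the authors' implementation; all module names, feature shapes, and hyperparameters (e.g., `dim=256`, `num_heads=8`) are assumptions.

```python
# Illustrative sketch (not the authors' code): multi-head cross-attention fusion
# where image tokens query text tokens, as one way to realize the multi-head
# fusion module described in the abstract. Shapes and names are assumptions.
import torch
import torch.nn as nn


class MultiHeadFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Image features act as queries; text features supply keys and values.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_feats: torch.Tensor, txt_feats: torch.Tensor) -> torch.Tensor:
        # img_feats: (B, N_img, dim) flattened spatial tokens from the denoising network
        # txt_feats: (B, N_txt, dim) token embeddings from a text encoder
        fused, _ = self.cross_attn(query=img_feats, key=txt_feats, value=txt_feats)
        # Residual connection preserves the original visual detail.
        return self.norm(img_feats + fused)


if __name__ == "__main__":
    fusion = MultiHeadFusion()
    img = torch.randn(2, 64 * 64, 256)   # hypothetical flattened feature-map tokens
    txt = torch.randn(2, 16, 256)        # hypothetical text embedding tokens
    print(fusion(img, txt).shape)        # torch.Size([2, 4096, 256])
```

In such a design, the fused features would be injected at each denoising step so that text guidance is applied progressively rather than only once at the input.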