BiPrompt: Bilateral Prompt Optimization for Visual and Textual Debiasing in Vision Language Models

Published: 06 Nov 2025 · Last Modified: 24 Dec 2025 · AIR-FM Poster · CC BY 4.0
Confirmation: I have read and agree with the workshop's policy on behalf of myself and my co-authors.
Keywords: Vision-Language Models, Test-Time Adaptation, Prompt Tuning, Debiasing, Causal Representation Learning, Visual Attention, Textual Normalization, Zero-Shot Learning, Out-of-Distribution Generalization
TL;DR: BiPrompt jointly debiases visual and textual representations in vision-language models using attention-guided erasure and balanced prompt normalization for causal and robust zero-shot generalization under distribution shifts.
Abstract: Vision-language foundation models such as CLIP exhibit impressive zero-shot generalization yet remain vulnerable to spurious correlations across visual and textual modalities. Existing debiasing approaches often address a single modality, either visual or textual, leading to partial robustness and unstable adaptation under distribution shifts. We propose a bilateral prompt optimization framework (BiPrompt) that simultaneously mitigates non-causal feature reliance in both modalities during test-time adaptation. On the visual side, it employs structured attention-guided erasure to suppress background activations and enforce orthogonal prediction consistency between causal and spurious regions. On the textual side, it introduces balanced prompt normalization, a learnable re-centering mechanism that aligns class embeddings toward an isotropic semantic space. Together, these modules jointly minimize the conditional mutual information between spurious cues and predictions, steering the model toward causal, domain-invariant reasoning without retraining or domain supervision. Extensive evaluations on real-world and synthetic bias benchmarks demonstrate consistent improvements in both average and worst-group accuracy over prior test-time debiasing methods, establishing a lightweight yet effective path toward trustworthy and causally grounded vision-language adaptation.
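The abstract describes two test-time modules: attention-guided erasure on the visual side and balanced prompt normalization on the textual side, combined so that spurious (background) cues carry as little predictive information as possible. The sketch below is a minimal, hypothetical illustration of how such a bilateral objective could be assembled in PyTorch; the function names, the top-k attention split, the uniform-prediction penalty used as a proxy for the conditional-mutual-information term, and all tensor shapes are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code), assuming CLIP-style patch
# features, a [CLS]-to-patch attention map, and one text embedding per class.
import torch
import torch.nn.functional as F

def split_by_attention(patch_feats, attn, keep_ratio=0.5):
    """Split patch features into a 'causal' (high-attention) pool and a
    'spurious' (low-attention) pool using a simple top-k rule."""
    k = max(1, int(keep_ratio * attn.numel()))
    idx = attn.argsort(descending=True)
    causal = patch_feats[idx[:k]].mean(dim=0)    # foreground summary
    spurious = patch_feats[idx[k:]].mean(dim=0)  # background summary
    return causal, spurious

def balanced_prompt_normalization(text_embeds, offset):
    """Re-center class embeddings with a learnable offset and re-normalize,
    nudging them toward a more isotropic arrangement on the hypersphere."""
    centered = text_embeds - text_embeds.mean(dim=0, keepdim=True) + offset
    return F.normalize(centered, dim=-1)

def bilateral_loss(patch_feats, attn, text_embeds, offset, label, tau=0.01):
    causal, spurious = split_by_attention(patch_feats, attn)
    causal = F.normalize(causal, dim=-1)
    spurious = F.normalize(spurious, dim=-1)
    texts = balanced_prompt_normalization(text_embeds, offset)

    logits_causal = causal @ texts.t() / tau
    logits_spurious = spurious @ texts.t() / tau

    # Fit the causal (foreground) view to the (pseudo-)label ...
    ce = F.cross_entropy(logits_causal.unsqueeze(0), label)
    # ... while pushing the spurious (background) view toward a uniform,
    # uninformative prediction -- a crude proxy for shrinking the mutual
    # information between background cues and the output.
    uniform = torch.full_like(logits_spurious, 1.0 / logits_spurious.numel())
    debias = F.kl_div(F.log_softmax(logits_spurious, dim=-1), uniform,
                      reduction="batchmean")
    return ce + debias

# Toy usage with random tensors standing in for CLIP outputs.
patch_feats = torch.randn(196, 512)            # ViT patch features
attn = torch.rand(196)                         # [CLS] attention over patches
text_embeds = torch.randn(10, 512)             # one embedding per class prompt
offset = torch.zeros(512, requires_grad=True)  # learnable re-centering term
loss = bilateral_loss(patch_feats, attn, text_embeds, offset, torch.tensor([3]))
loss.backward()                                # gradients flow to the offset
```

In this toy setup only the textual offset is trainable, which mirrors the abstract's claim of adaptation without retraining the backbone; how the attention map is obtained and which parameters are actually updated at test time are details left to the paper itself.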
Submission Track: Workshop Paper Track
Submission Number: 52