Representation-Level Counterfactual Calibration for Debiased Zero-Shot Recognition

Published: 18 Sept 2025, Last Modified: 29 Oct 2025, NeurIPS 2025 poster, CC BY 4.0
Keywords: Zero-Shot Recognition, Counterfactual Reasoning, Causal Inference, CLIP
TL;DR: We introduce a representation-level counterfactual framework to mitigate co-occurrence bias in vision–language models without retraining or prompt engineering.
Abstract: Object–context shortcuts remain a persistent challenge in vision-language models, undermining zero-shot reliability when test-time scenes diverge from familiar training co-occurrences. We recast this issue as a causal inference problem and ask: would the prediction remain the same if the object appeared in a different environment? To answer this at inference time, we estimate object and background expectations within CLIP's representation space and synthesize counterfactual embeddings by recombining object features with diverse alternative contexts sampled from external datasets, batch neighbors, or text-derived descriptions. By estimating the Total Direct Effect and simulating an intervention, we further subtract background-only activation, preserving beneficial object–context interactions while mitigating hallucinated scores. Without retraining or prompt design, our method substantially improves both worst-group and average accuracy on context-sensitive benchmarks, establishing a new zero-shot state of the art. Beyond performance, our framework provides a lightweight, representation-level counterfactual approach, offering a practical causal avenue for debiased and reliable multimodal reasoning.
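
To make the calibration step concrete, below is a minimal sketch of the background-subtraction idea in PyTorch. It is not the paper's exact procedure: the function name `calibrated_zero_shot_scores`, the mixing weight `lam`, and the simple mean over sampled contexts are all assumptions for illustration; the paper's TDE estimation and counterfactual recombination details are more involved.

```python
import torch

@torch.no_grad()
def calibrated_zero_shot_scores(image_emb, class_embs, context_embs, lam=1.0):
    """Counterfactual calibration sketch for CLIP-style zero-shot scoring.

    image_emb:    (d,)   L2-normalized image embedding of the test image
    class_embs:   (C, d) L2-normalized class text embeddings
    context_embs: (K, d) L2-normalized embeddings of alternative contexts
                  (e.g. external images, batch neighbors, text descriptions)
    lam:          assumed strength of the background correction
    """
    # Factual zero-shot scores: cosine similarity between the image and
    # each class prompt, as in standard CLIP inference.
    factual = class_embs @ image_emb                          # (C,)

    # Background-only activation: how strongly each class fires on the
    # sampled contexts alone (no target object), averaged over contexts.
    # This approximates the hallucinated score attributable to background.
    background = (class_embs @ context_embs.T).mean(dim=-1)  # (C,)

    # Total-Direct-Effect-style calibration: keep object-driven evidence
    # while subtracting context-only evidence.
    return factual - lam * background
```

The predicted class would then be `scores.argmax()`. In practice, `lam` would be tuned or derived from the estimated Total Direct Effect, which this sketch does not reproduce.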
Supplementary Material: zip
Primary Area: General machine learning (supervised, unsupervised, online, active, etc.)
Submission Number: 22396