ProCoSA: Probabilistic Concept Learning with Spatial Alignment

10 Sept 2025 (modified: 06 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: human–machine interaction, interpretable AI, concept bottleneck models, expectation–maximization
TL;DR: ProCoSA probabilistically infers missing concepts via spatial alignment, producing calibrated concept signals that improve interpretability and downstream accuracy under sparse supervision.
Abstract: Concepts are human-interpretable semantic units that enable intervenable intermediate representations in vision models. However, acquiring concept annotations is expensive and typically incomplete, limiting scalable interpretability. We propose \textbf{ProCoSA}, a probabilistic framework that treats missing concepts as latent variables and jointly infers concept posteriors and task predictions under partial supervision. To enhance spatial coherence and reduce pseudo-label bias, \textbf{ProCoSA} introduces a spatial alignment prior that encourages concept activations to align with salient image regions, yielding more calibrated concept probabilities for downstream reasoning. The framework integrates seamlessly into existing concept-to-task pipelines without relying on any specific bottleneck architecture. Experiments on four benchmark datasets under low concept supervision show that \textbf{ProCoSA} consistently matches or surpasses state-of-the-art methods on both concept and task performance under identical evaluation protocols. The code will be released upon acceptance.
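The abstract describes two core ideas: an EM-style E-step that treats unannotated concepts as latent variables (clamping observed concepts to their labels while imputing missing ones with the model's posterior belief), and a spatial alignment prior that penalizes concept activations falling outside salient image regions. The paper's actual formulation is not given here, so the following is only a minimal illustrative sketch under those assumptions; the function names, the hard clamping of observed labels, and the mass-outside-saliency penalty are all hypothetical simplifications, not ProCoSA's implementation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def e_step(logits, labels, mask):
    """E-step sketch for partial concept supervision.

    Observed concepts (mask = 1) are clamped to their annotated labels;
    missing concepts (mask = 0) take the model's current sigmoid belief,
    acting as soft pseudo-labels for the M-step.
    """
    return [y if m else sigmoid(z) for z, y, m in zip(logits, labels, mask)]

def alignment_penalty(concept_map, saliency, eps=1e-8):
    """Toy spatial alignment prior (hypothetical form).

    Returns the fraction of a concept's activation mass that falls
    outside salient pixels. concept_map holds non-negative activations
    and saliency holds values in [0, 1], both flattened over pixels;
    a penalty of 0 means all activation lies in salient regions.
    """
    mass = sum(concept_map) + eps
    aligned = sum(c * s for c, s in zip(concept_map, saliency))
    return 1.0 - aligned / mass
```

In this toy form, minimizing the penalty alongside the task loss would push imputed concept probabilities toward spatially coherent evidence, which is the intuition the abstract attributes to the alignment prior.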
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 3727