How Fragile Is Vision-Language Alignment? Mapping Concept Disruption Under Text-to-Image Personalization

Published: 05 May 2026 · Last Modified: 13 May 2026 · 4th ALVR Workshop Poster · CC BY 4.0
Keywords: Text-to-Image Diffusion, Vision-Language Alignment, Model Robustness, Concept Personalization, Cross-Attention
TL;DR: We introduce Concept Entanglement Maps to reveal how personalizing diffusion models causes structured degradation of vision-language alignment, showing that cross-attention is highly fragile and abstract language is disproportionately disrupted.
Abstract: Text-to-image diffusion models learn a mapping from natural language to visual structure, but how robust is this mapping to perturbation? We use personalization—fine-tuning a model to learn a new face, object, or style—as a controlled stress test to probe the fragility of learned vision-language alignment. We find that fine-tuning for one concept systematically degrades the model’s ability to faithfully render unrelated concepts, and that this disruption follows structured, predictable patterns. To measure this fragility, we construct Concept Entanglement Maps: per-prompt, per-model disruption matrices that reveal which concepts are most affected and why. Using Stable Diffusion v1.5 as a controlled testbed, we evaluate 15 subjects across three personalization methods on 200 prompts and report three findings about the organization of vision-language alignment: (1) aggregate disruption is larger for vision-backbone and cross-attention perturbations than for text-embedding perturbations, despite the latter directly modifying the language representation; (2) abstract and compositional language is significantly more fragile than concrete, object-specific language; and (3) disruption does not follow semantic proximity—personalizing for a face does not preferentially disrupt other face-related prompts ($p = 1.0$), suggesting that alignment vulnerability is organized globally rather than purely by semantic category. These findings expose a structural vulnerability in current text-to-image personalization: the same cross-attention mechanism that enables compositional generalization also creates pathways through which local fine-tuning can propagate as global alignment shift.
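The Concept Entanglement Map described above can be sketched as a simple matrix construction. The sketch below is an illustration, not the paper's implementation: it assumes a hypothetical `score_fn(model, prompt)` that returns an alignment score for images generated by a model from a prompt (e.g. a CLIP text-image similarity averaged over samples), and uses toy scores in place of real models.

```python
import numpy as np

def concept_entanglement_map(score_fn, base, personalized_models, prompts):
    """Build a per-prompt, per-model disruption matrix.

    score_fn(model, prompt) -> alignment score; higher = more faithful.
    Entry [i, j] is the drop in alignment for prompt i after
    personalization j, relative to the shared base model, so
    positive values indicate disruption of that concept.
    """
    base_scores = np.array([score_fn(base, p) for p in prompts])
    cols = []
    for m in personalized_models:
        tuned = np.array([score_fn(m, p) for p in prompts])
        cols.append(base_scores - tuned)
    return np.stack(cols, axis=1)  # shape: (n_prompts, n_models)

# Toy stand-in scores for illustration (hypothetical values, not results
# from the paper): one concrete prompt and one abstract prompt.
toy_scores = {
    ("base", "a red cube"): 0.31,
    ("base", "the concept of freedom"): 0.28,
    ("ft_face", "a red cube"): 0.30,
    ("ft_face", "the concept of freedom"): 0.21,
}
score = lambda model, prompt: toy_scores[(model, prompt)]

cem = concept_entanglement_map(
    score, "base", ["ft_face"],
    ["a red cube", "the concept of freedom"],
)
```

Aggregating such a matrix over rows gives per-model disruption; comparing rows grouped by prompt type (abstract vs. concrete, face-related vs. not) supports the kinds of comparisons reported in the abstract.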
Submission Number: 46