Abstract: Text-to-image models generate highly realistic images
based on natural language descriptions, and millions of users
use them to create and share images online. While such models are expected to align input text and generated images in a shared latent space, little has been done to understand
whether this alignment is possible between pathological speech
and generated images. In this work, we examine the ability of
such models to align dementia-related speech information with
the generated images and develop methods to explain this alignment. Surprisingly, we found that dementia detection is possible from generated images alone achieving 75% accuracy on
the ADReSS dataset. We then leverage explainability methods
to show which parts of the language contribute to the detection.