Yo'LLaVA: Your Personalized Language and Vision Assistant

Published: 25 Sept 2024, Last Modified: 18 Nov 2024, NeurIPS 2024 poster, CC BY 4.0
Keywords: personalization, multimodal models
TL;DR: We introduce Yo'LLaVA -- a framework to embed personalized subjects (e.g., your pet) into a comprehensible prompt for Large Multimodal Models (e.g., LLaVA)
Abstract: Large Multimodal Models (LMMs) have shown remarkable capabilities across a variety of tasks (e.g., image captioning, visual question answering). While broad, their knowledge remains generic (e.g., recognizing a dog), and they are unable to handle personalized subjects (e.g., recognizing a user's pet dog). Human reasoning, in contrast, typically operates within the context of specific subjects in our surroundings. For example, one might ask, "What should I buy for *my dog*'s birthday?", as opposed to the generic inquiry "What should I buy for *a dog*'s birthday?". Similarly, when looking at a friend's image, one is interested in that friend's activities (e.g., "*my friend* is holding a cat"), rather than merely generic human actions (e.g., "*a man* is holding a cat"). In this paper, we introduce the novel task of personalizing LMMs, so that they can have conversations about a specific subject. We propose Yo'LLaVA, which learns to embed a personalized subject into a set of latent tokens given a handful of example images of the subject. Our qualitative and quantitative analyses reveal that Yo'LLaVA learns the concept more efficiently, using fewer tokens, and encodes its visual attributes more effectively than strong prompting baselines (e.g., LLaVA).
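
The sketch below illustrates the core idea described in the abstract: a handful of learnable latent ("concept") tokens are optimized from a few example images while the multimodal backbone stays frozen. It is not the authors' implementation; the `ToyFrozenLMM` stand-in, the crude feature fusion, and the binary "is this my subject?" objective are simplifying assumptions made so the example is self-contained and runnable. With a real LMM such as LLaVA, the learned embeddings would instead be concatenated with the model's prompt embeddings and trained on the paper's recognition and conversation data.

```python
# Minimal sketch (not the paper's code): learn a few soft "concept" tokens for a
# personalized subject from a handful of images, keeping the backbone frozen.
import torch
import torch.nn as nn

torch.manual_seed(0)

EMBED_DIM = 64          # hidden size of the (toy) frozen model
NUM_CONCEPT_TOKENS = 4  # number of latent tokens representing e.g. "<my-dog>"

class ToyFrozenLMM(nn.Module):
    """Stand-in for a frozen multimodal backbone (hypothetical, for illustration)."""
    def __init__(self, dim=EMBED_DIM):
        super().__init__()
        self.head = nn.Linear(dim, 2)  # binary: is the subject in the image?

    def forward(self, image_feat, prompt_embeds):
        # image_feat: (B, dim) pooled visual features; prompt_embeds: (B, T, dim)
        fused = image_feat.unsqueeze(1) + prompt_embeds  # crude fusion for the sketch
        return self.head(fused.mean(dim=1))              # (B, 2) logits

model = ToyFrozenLMM()
for p in model.parameters():
    p.requires_grad = False  # backbone weights stay frozen

# The only trainable parameters: the personalized latent tokens.
concept_tokens = nn.Parameter(torch.randn(NUM_CONCEPT_TOKENS, EMBED_DIM) * 0.02)
optimizer = torch.optim.AdamW([concept_tokens], lr=1e-3)

# A handful of positives (images of the subject) and negatives. In practice these
# would be visual features of real photos; here they are random placeholders.
pos_feats = torch.randn(5, EMBED_DIM)
neg_feats = torch.randn(5, EMBED_DIM)
image_feats = torch.cat([pos_feats, neg_feats])             # (10, dim)
labels = torch.cat([torch.ones(5), torch.zeros(5)]).long()  # 1 = subject present

loss_fn = nn.CrossEntropyLoss()
for step in range(200):
    prompt = concept_tokens.unsqueeze(0).expand(image_feats.size(0), -1, -1)
    logits = model(image_feats, prompt)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```

After training, only the small set of concept-token embeddings needs to be stored per subject; at inference they are injected into the prompt wherever the subject is mentioned.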
Primary Area: Machine vision
Submission Number: 1139