A Generalization Theory for Zero-Shot Prediction

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Oral · CC BY-NC-ND 4.0
TL;DR: We present a theoretical framework for zero-shot prediction by prompting, highlighting the conditional independence relationships that underpin the success of this approach.
Abstract: A modern paradigm for generalization in machine learning and AI consists of pre-training a task-agnostic foundation model, generally obtained using self-supervised and multimodal contrastive learning. The resulting representations can be used for prediction on a downstream task for which no labeled data is available. We present a theoretical framework to better understand this approach, called zero-shot prediction. We identify the target quantities that zero-shot prediction aims to learn, or learns in passing, and the key conditional independence relationships that enable its generalization ability.
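To make the paradigm in the abstract concrete, here is a minimal Python sketch of zero-shot prediction by prompting with a CLIP-style model: embed one text prompt per class, embed the input, and predict the class whose prompt embedding is most similar. Everything in the sketch (the encoders, the embedding dimension, the prompt template) is a hypothetical stand-in so the script runs end to end, not the paper's method; a real system would use pre-trained multimodal encoders.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 64  # hypothetical shared embedding dimension

def image_encoder(image: np.ndarray) -> np.ndarray:
    """Hypothetical image encoder: a random projection standing in for a
    pre-trained foundation model; returns a unit-norm embedding."""
    w = rng.standard_normal((image.size, EMBED_DIM))
    z = image.ravel() @ w
    return z / np.linalg.norm(z)

def text_encoder(prompt: str) -> np.ndarray:
    """Hypothetical text encoder: deterministic per prompt, unit-norm."""
    seed = abs(hash(prompt)) % (2**32)
    z = np.random.default_rng(seed).standard_normal(EMBED_DIM)
    return z / np.linalg.norm(z)

def zero_shot_predict(image: np.ndarray, class_names: list[str]) -> str:
    """Label an input with no labeled training data: score each class by
    the cosine similarity between the input embedding and the embedding
    of a natural-language prompt describing that class."""
    prompts = [f"a photo of a {name}" for name in class_names]
    text_embs = np.stack([text_encoder(p) for p in prompts])  # (C, D)
    img_emb = image_encoder(image)                            # (D,)
    scores = text_embs @ img_emb  # cosine similarity (unit-norm vectors)
    return class_names[int(np.argmax(scores))]

image = rng.standard_normal((8, 8))  # placeholder "image"
print(zero_shot_predict(image, ["cat", "dog", "bird"]))
```

The key point the sketch isolates is that no labeled examples from the downstream task are used: the class structure enters only through the prompts, which is exactly where the paper's conditional independence assumptions come into play.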
Lay Summary: Traditional machine learning approaches rely on fitting models to a set of input-output examples. For data-scarce applications, such as classifying medical images, there may not be enough such examples to produce a performant classifier. Zero-shot prediction is a method in which models that were trained for other, complex tasks are combined and reused to build classifiers for such applications, without any additional labeled training examples. This remarkable modern technique does not yet enjoy the same level of mathematical understanding as the classical approach outlined above. We aim to address this gap by providing a theoretical model of zero-shot prediction in which the qualities of the data and task that make the method succeed or fail can be expressed and analyzed mathematically.
Link To Code: https://github.com/ronakdm/zeroshot
Primary Area: Theory->Learning Theory
Keywords: zero-shot, self-supervised learning, foundation models, learning theory, statistical theory
Submission Number: 4085