Keywords: Algorithmic Microscopy, Data-Driven Science, Vendi Scoring
TL;DR: We present the Vendiscope, a scalable algorithmic microscope that enables the systematic detection of outliers, duplicates, model failure modes, and memorization across domains.
Abstract: The evolution of microscopy, beginning with its invention in the late 16th century, has continuously enhanced our ability to explore and understand the microscopic world, enabling increasingly detailed observations of structures and phenomena. In parallel, the rise of data-driven science has underscored the need for sophisticated methods to explore and understand the composition of complex data collections. This paper introduces the $\textit{Vendiscope}$, the first $\textit{algorithmic microscope}$ designed to extend traditional microscopy to computational analysis. The Vendiscope leverages the Vendi scores -- a family of differentiable diversity metrics -- and assigns weights to data points based on their contribution to the overall diversity of the collection. These weights enable high-resolution data analysis at scale. We demonstrate this across biology and machine learning (ML). We analyzed the 250 million protein sequences in the protein universe, discovering that over 200 million are near-duplicates and that ML models like AlphaFold fail on proteins with Gene Ontology (GO) functions that contribute most to diversity. Additionally, the Vendiscope can be used to study phenomena such as memorization in generative models. We used the Vendiscope to identify memorized training samples from 13 different generative models spanning several model classes and found that the best-performing generative models often memorize the training samples that contribute least to diversity. Our findings demonstrate that the Vendiscope can serve as a powerful tool for data-driven science, providing a systematic and scalable way to identify duplicates and outliers, as well as to pinpoint samples prone to memorization and those that models may struggle to predict -- even before training.
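To make the diversity metric underlying the Vendiscope concrete, here is a minimal sketch of the Vendi Score: the exponential of the Shannon entropy of the eigenvalues of a normalized similarity matrix, which behaves like an "effective number of distinct items" in a collection. The cosine-similarity kernel and the toy data below are illustrative choices, not the paper's setup, and the Vendiscope's per-point diversity weights are a further construction not shown here.

```python
import numpy as np

def vendi_score(K):
    """Vendi Score of a collection with similarity matrix K
    (diag(K) = 1): exp of the Shannon entropy of the
    eigenvalues of K / n."""
    n = K.shape[0]
    lam = np.linalg.eigvalsh(K / n)   # eigenvalues sum to 1 when diag(K) = 1
    lam = lam[lam > 1e-12]            # drop numerical zeros before taking logs
    return float(np.exp(-np.sum(lam * np.log(lam))))

def cosine_kernel(X):
    """Illustrative similarity: cosine similarity between rows of X."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

# Three identical points: no diversity, score ~ 1.
X_dup = np.ones((3, 4))
# Three mutually orthogonal points: maximal diversity, score ~ 3.
X_div = np.eye(3, 4)

print(vendi_score(cosine_kernel(X_dup)))  # ~ 1.0
print(vendi_score(cosine_kernel(X_div)))  # ~ 3.0
```

A collection of near-duplicates thus scores close to 1 regardless of its size, which is the intuition behind flagging the 200 million near-duplicate protein sequences mentioned above.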
Primary Area: interpretability and explainable AI
Submission Number: 22063