Keywords: ANN search, DiskANN, Filtered DiskANN, Vector Search, RAG, Filtered ANN
TL;DR: We propose a data-driven approach for filtered approximate nearest neighbor search that learns optimal trade-offs between vector distance and filter satisfaction, outperforming fixed-penalty methods by 5–10% in accuracy across diverse datasets.
Abstract: Filtered Approximate Nearest Neighbor (ANN) search retrieves the vectors closest to a query vector from a dataset, subject to the constraint that a specified set of discrete labels $S$ for the query is included in the labels of each retrieved vector. Existing graph-based methods typically incorporate filter awareness by assigning fixed penalties or prioritizing nodes based on filter satisfaction. However, because these methods use fixed, data-independent penalties, they often fail to generalize across datasets with diverse label and vector distributions.
In this work, we propose a principled alternative that learns the optimal trade-off between vector distance and filter match directly from the data, rather than relying on fixed penalties. We formulate this as a constrained linear optimization problem, deriving weights that reflect the underlying filter distribution and more effectively address the filtered ANN search problem. These learned weights guide both the search process and index construction, yielding graph structures that better capture the filter semantics of the data.
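To make the idea concrete, here is a minimal sketch of how a trade-off weight between vector distance and filter mismatch could be learned with a linear program. All names and the toy training data below are illustrative assumptions, not the paper's actual formulation: each training pair contributes a constraint that the learned penalty must make a filter-satisfying neighbor score at least as well as a filter-violating one, with slack variables absorbing infeasible pairs.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical training signal (illustrative, not from the paper):
# gap[i]  = dist(query, good_neighbor) - dist(query, bad_neighbor) + margin,
#           i.e. how much penalty the bad (filter-violating) neighbor must receive.
# miss[i] = number of query labels the bad neighbor is missing.
gap = np.array([0.3, 0.1, 0.5])
miss = np.array([1.0, 2.0, 1.0])

# LP variables x = [w, s_1, ..., s_k]: one trade-off weight w plus one slack per pair.
# Objective: minimize total slack, with a tiny regularizer on w.
k = len(gap)
c = np.concatenate(([1e-3], np.ones(k)))
# Per-pair constraint  w * miss[i] + s_i >= gap[i]
# rewritten for linprog's A_ub @ x <= b_ub form as  -miss[i]*w - s_i <= -gap[i].
A_ub = np.hstack([-miss.reshape(-1, 1), -np.eye(k)])
b_ub = -gap
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (k + 1))
w = res.x[0]  # learned penalty per missing label

def filtered_score(dist, n_missing_labels, w=w):
    """Search-time score: vector distance plus the learned mismatch penalty."""
    return dist + w * n_missing_labels
```

In this toy instance the LP drives all slacks to zero by setting `w` to the largest per-label gap (0.5), and `filtered_score` would then rank candidates during graph traversal; the paper's actual constraints and objective may differ.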
Our experiments demonstrate that adapting the distance function to the data improves accuracy by 5–10% over fixed-penalty methods, providing a more flexible and generalizable framework for the filtered ANN search problem.
Submission Number: 31