TL;DR: A novel framework that learns high-fidelity sparse embeddings for efficient representation
Abstract: Many large-scale systems rely on high-quality deep representations (embeddings) to facilitate tasks like retrieval, search, and generative modeling. Matryoshka Representation Learning (MRL) recently emerged as a solution for adaptive embedding lengths, but it requires full model retraining and suffers from noticeable performance degradation at short lengths. In this paper, we show that *sparse coding* offers a compelling alternative for achieving adaptive representation with minimal overhead and higher fidelity. We propose **Contrastive Sparse Representation** (**CSR**), a method that sparsifies pre-trained embeddings into a high-dimensional but *selectively activated* feature space. By leveraging lightweight autoencoding and task-aware contrastive objectives, CSR preserves semantic quality while allowing flexible, cost-effective inference at different sparsity levels. Extensive experiments on image, text, and multimodal benchmarks demonstrate that CSR consistently outperforms MRL in terms of both accuracy and retrieval speed—often by large margins—while also cutting training time to a fraction of that required by MRL. Our results establish sparse coding as a powerful paradigm for adaptive representation learning in real-world applications where efficiency and fidelity are both paramount. Code is available at [this URL](https://github.com/neilwen987/CSR_Adaptive_Rep).
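To make the abstract's description concrete, below is a minimal, hedged sketch of the kind of pipeline it describes: a lightweight autoencoding head that maps frozen pre-trained embeddings into a wide latent space, keeps only the top-k activations (the "selectively activated" feature space), and is trained with a reconstruction term plus an InfoNCE-style contrastive term over the sparse codes. All names, dimensions, and loss weights here are illustrative assumptions, not the authors' exact implementation; the actual CSR objectives are defined in the paper and repository.

```python
import torch
import torch.nn.functional as F
from torch import nn


class TopKSparseHead(nn.Module):
    """Illustrative sparse-coding head over frozen pre-trained embeddings.

    Maps a d_in-dim embedding into a wide d_latent space and keeps only
    the k largest activations, so the stored/compared code is sparse.
    Hyperparameters are assumptions for illustration only.
    """

    def __init__(self, d_in=768, d_latent=8192, k=32):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_in, d_latent)
        self.decoder = nn.Linear(d_latent, d_in)

    def encode(self, x, k=None):
        k = k if k is not None else self.k
        z = F.relu(self.encoder(x))
        # Keep only the k largest activations per example; zero out the rest.
        topk = torch.topk(z, k, dim=-1)
        return torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)

    def forward(self, x):
        z = self.encode(x)
        return z, self.decoder(z)


def csr_style_loss(model, x, x_pos, temperature=0.07, recon_weight=1.0):
    """Hedged approximation of a combined autoencoding + contrastive objective.

    x and x_pos are pre-trained embeddings of positive pairs (e.g. two views
    or matched modalities); the exact CSR losses and weights are in the paper.
    """
    z, x_hat = model(x)
    z_pos, _ = model(x_pos)
    recon = F.mse_loss(x_hat, x)
    # InfoNCE over sparse codes: positives sit on the diagonal of the
    # batch similarity matrix, all other pairs act as negatives.
    logits = F.normalize(z, dim=-1) @ F.normalize(z_pos, dim=-1).T / temperature
    labels = torch.arange(z.size(0), device=z.device)
    return recon_weight * recon + F.cross_entropy(logits, labels)


if __name__ == "__main__":
    model = TopKSparseHead()
    emb = torch.randn(16, 768)                       # frozen backbone embeddings
    emb_pos = emb + 0.01 * torch.randn_like(emb)     # stand-in positive views
    loss = csr_style_loss(model, emb, emb_pos)
    loss.backward()
    print(loss.item())
```

Because only the small head is trained while the backbone stays frozen, this setup also hints at why the abstract reports a fraction of MRL's training cost, though the speed numbers themselves come from the paper's experiments.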
Lay Summary: Modern AI systems rely on "embeddings": digital representations that capture the meaning of images, text, or other data. These embeddings need to work efficiently across devices with different computing capabilities, from powerful servers to mobile phones.
Our research introduces **Contrastive Sparse Representation** (**CSR**), a new technique that makes embeddings more adaptable without sacrificing quality. Unlike previous approaches that require complete retraining of AI models, CSR works with existing pre-trained embeddings and transforms them into a format where only the most important features are activated.
Think of CSR like compressing a high-resolution photo: you can choose different compression levels depending on your needs, with each level preserving the most important visual information. Similarly, CSR allows AI systems to adjust embedding sizes based on available resources while maintaining accuracy.
Our experiments with images, text, and combined data show that CSR outperforms previous methods in both accuracy and speed. It also requires significantly less training time, making it practical for real-world applications where both performance and efficiency matter.
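The "compression level" analogy above can be illustrated with the sketch given after the abstract: the same trained head can be queried with different numbers of active dimensions depending on the available budget. The snippet below reuses the hypothetical `TopKSparseHead` class from that sketch and is an assumption about usage, not the released API.

```python
import torch

# Reuses the illustrative TopKSparseHead defined in the sketch after the abstract.
model = TopKSparseHead(d_in=768, d_latent=8192, k=32)
query = torch.randn(1, 768)

z_server = model.encode(query, k=128)   # larger budget: more active dimensions
z_mobile = model.encode(query, k=8)     # tight budget: very sparse code
print((z_server != 0).sum().item(), (z_mobile != 0).sum().item())  # 128 8
```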
Link To Code: https://github.com/neilwen987/CSR_Adaptive_Rep
Primary Area: General Machine Learning->Representation Learning
Keywords: Sparse Coding; Matryoshka Representation Learning; Adaptive Representation; Efficient Machine Learning; Sparse Autoencoder
Submission Number: 8786