- Abstract: Dimensionality reduction methods are unsupervised approaches which learn low-dimensional spaces where some properties of the initial space, typically the notion of “neighborhood”, are preserved. Such methods usually require propagation on large k-NN graphs or complicated optimization solvers. On the other hand, self-supervised learning approaches, typically used to learn representations from scratch, rely on simple and more scalable frameworks for learning. In this paper, we propose TLDR, a dimensionality reduction method for generic input spaces that ports the recent self-supervised learning framework of Zbontar et al. (2021) to the specific task of dimensionality reduction, over arbitrary representations. We propose to use nearest neighbors to build pairs from a training set and a redundancy reduction loss to learn an encoder that produces representations invariant across such pairs. TLDR is a method that is simple, easy to train, and of broad applicability; it consists of an offline nearest neighbor computation step that can be highly approximated, and a straightforward learning process. Aiming for scalability, we focus on improving linear dimensionality reduction, and show consistent gains on image and document retrieval tasks, e.g. gaining +4% mAP over PCA on ROxford for GeM-AP, improving the performance of DINO on ImageNet or retaining it with a 10× compression.
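The two ingredients the abstract names, offline nearest-neighbor pairing and the redundancy-reduction loss of Zbontar et al. (2021), can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names are our own, the k-NN step is brute force (the abstract notes it can be highly approximated at scale), and the off-diagonal weight `lam` is an illustrative hyperparameter.

```python
import numpy as np

def knn_pairs(X, k=3):
    # Offline step: for each training point, find its k nearest
    # neighbors (brute-force here; approximate k-NN in practice).
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)            # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]
    anchors = np.repeat(np.arange(len(X)), k)
    return anchors, nn.ravel()             # index pairs (anchor, neighbor)

def redundancy_reduction_loss(Za, Zb, lam=5e-3):
    # Loss in the style of Zbontar et al. (2021): the cross-correlation
    # matrix between the two embedded views of a batch of pairs is
    # pushed toward the identity.
    Za = (Za - Za.mean(0)) / (Za.std(0) + 1e-8)   # normalize per dimension
    Zb = (Zb - Zb.mean(0)) / (Zb.std(0) + 1e-8)
    C = Za.T @ Zb / len(Za)                       # cross-correlation matrix
    on_diag = ((np.diag(C) - 1) ** 2).sum()       # invariance term
    off_diag = (C ** 2).sum() - (np.diag(C) ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag

# Usage sketch: embed neighbor pairs with a linear encoder W and
# evaluate the loss that training would minimize.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))               # toy high-dimensional features
a, b = knn_pairs(X, k=3)
W = rng.normal(size=(8, 4))                # linear dimensionality reduction
loss = redundancy_reduction_loss(X[a] @ W, X[b] @ W)
```

Training then amounts to minimizing this loss over the encoder parameters `W` with any gradient-based optimizer, which is the "straightforward learning process" the abstract refers to.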
- License: Creative Commons Attribution 4.0 International (CC BY 4.0)
- Submission Length: Long submission (more than 12 pages of main content)
- Changes Since Last Submission: We thank the reviewers and editors for accepting our manuscript "as is" after the latest changes. The reviews and comments made our manuscript stronger. This is a de-anonymized camera-ready version in which we further added a link to the public codebase of TLDR.
- Code: https://github.com/naver/tldr
- Assigned Action Editor: ~Brian_Kulis1