Abstract: Matrices are exceptionally useful in various fields of study as they provide a
convenient framework to organize and manipulate data in a structured manner.
However, modern matrices can involve billions of elements, making their storage
and processing quite demanding in terms of computational resources and memory
usage. Although prohibitively large, such matrices are often approximately low
rank. We propose an algorithm that exploits this structure to obtain a low rank
decomposition of any matrix A as A = LR, where L and R are the low rank
factors. The total number of elements in L and R can be significantly less than that
in A. Furthermore, the entries of L and R are quantized to low precision formats
– compressing A by giving us a low rank and low precision factorization. Our
algorithm first computes an approximate basis of the range space of A by randomly
sketching its columns, followed by a quantization of the vectors constituting
this basis. It then computes approximate projections of the columns of A onto
this quantized basis. We derive upper bounds on the approximation error of our
algorithm, and analyze the impact of target rank and quantization bit-budget. The
tradeoff between compression ratio and the approximation accuracy allows for
flexibility in choosing these parameters based on specific application requirements.
We empirically demonstrate the efficacy of our algorithm in nearest neighbor
classification of image and text embeddings. Our results illustrate that we can
achieve compression ratios as aggressive as one bit per matrix coordinate, all while
surpassing or maintaining the performance of traditional compression techniques.
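The sketch below is a minimal NumPy illustration of the pipeline described above: randomly sketch the columns of A to approximate its range space, quantize the resulting basis to get L, then compute approximate projections of A's columns onto that quantized basis and quantize them to get R. The function names, the uniform scalar quantizer, and the least-squares projection step are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def quantize(X, bits):
    """Uniform scalar quantization to a given bit budget (illustrative placeholder)."""
    levels = 2 ** bits
    lo, hi = X.min(), X.max()
    scale = (hi - lo) / (levels - 1)
    return np.round((X - lo) / scale) * scale + lo

def low_rank_low_precision(A, rank, bits_L=4, bits_R=4, oversample=10, seed=0):
    """Sketch-based low rank, low precision factorization A ~= L @ R (assumed variant)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Randomly sketch the columns of A to approximate its range space.
    S = rng.standard_normal((n, rank + oversample))
    Y = A @ S
    # Orthonormal basis of the sketch, truncated to the target rank.
    Q, _ = np.linalg.qr(Y)
    Q = Q[:, :rank]
    # Quantize the basis vectors to obtain the left factor L.
    L = quantize(Q, bits_L)
    # Approximate projections of A's columns onto the quantized basis,
    # followed by quantization of the right factor R.
    R, *_ = np.linalg.lstsq(L, A, rcond=None)
    R = quantize(R, bits_R)
    return L, R

# Usage: compress a synthetic approximately low rank matrix and check the error.
A = np.random.default_rng(1).standard_normal((1000, 32)) @ \
    np.random.default_rng(2).standard_normal((32, 500))
L, R = low_rank_low_precision(A, rank=32)
print("relative Frobenius error:", np.linalg.norm(A - L @ R) / np.linalg.norm(A))
```

The bit budgets bits_L and bits_R and the target rank control the tradeoff between compression ratio and approximation accuracy mentioned in the abstract.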