High-precision RNS-CKKS on fixed but smaller word-size architectures: theory and application

Published: 01 Jan 2023 · Last Modified: 16 May 2025 · WAHC@CCS 2023 · CC BY-SA 4.0
Abstract: A prevalent issue in the residue number system (RNS) variant of the Cheon-Kim-Kim-Song (CKKS) homomorphic encryption (HE) scheme is the challenge of efficiently achieving high precision on hardware architectures with a fixed, yet smaller, word size of bit length W, especially when the scaling factor satisfies log Δ > W. In this work, we introduce an efficient solution termed composite scaling. In this approach, we group multiple RNS primes as ql := Π_{j=0}^{t−1} ql,j such that log ql,j < W for 0 ≤ j < t, and use each composite ql in the rescaling procedure as ct → [ct/ql]. This strategy contrasts with the traditional rescaling method in RNS-CKKS, where each ql is chosen as a single (log Δ)-bit prime, a method we designate as single scaling. To achieve higher precision in single scaling, where log Δ > W, one would either need a novel hardware architecture with word size W' > log Δ or would have to resort to relatively inefficient solutions rooted in multi-precision arithmetic. This problem, however, does not arise in composite scaling. In the composite scaling approach, the larger the composition degree t, the greater the precision attainable with RNS-CKKS across an extensive range of secure parameters tailored for workload deployment. We have integrated composite scaling RNS-CKKS into both the OpenFHE and Lattigo libraries. This integration was achieved via a concrete implementation of the method and its application to the most up-to-date workloads, specifically logistic regression training and convolutional neural network inference. Our experiments demonstrate that the single and composite scaling approaches are functionally equivalent, both theoretically and practically.
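At its core, the functional equivalence of the two rescaling styles comes down to integer division: dividing by a composite ql = ql,0 · ql,1 in a single step, or by each word-size factor in turn, yields the same rescaled result. The following toy Python sketch illustrates this on plain integers; it is not RNS-CKKS itself (no polynomials, no encryption, no RNS limbs), and the word size W and moduli are illustrative stand-ins for the NTT-friendly primes a real implementation would use.

```python
# Toy sketch of composite vs. single scaling on plain integers.
# Not RNS-CKKS: no polynomials, no encryption. The moduli below are
# illustrative stand-ins for word-size NTT-friendly RNS primes.

W = 30                              # hypothetical machine word size (bits)
q0, q1 = 2**30 - 35, 2**30 - 41     # two factors, each below 2**W
ql = q0 * q1                        # composite factor, log ql ~ 60 > W

a, b = 3.5, 2.25
ct_a = round(a * ql)                # values encoded at scale Delta = ql
ct_b = round(b * ql)
prod = ct_a * ct_b                  # after multiplication, scale is ql**2

# Single scaling: one division by the full ~60-bit factor.
single = prod // ql

# Composite scaling: divide by each word-size factor in turn.
composite = (prod // q0) // q1

# floor(floor(n/q0)/q1) == floor(n/(q0*q1)) for positive integers,
# so the two rescaled results agree exactly.
assert single == composite
print(single / ql)                  # approximately a * b = 7.875
```

On a W-bit architecture, each division step in the composite path involves only word-size moduli, which is what removes the need for wider hardware or multi-precision arithmetic when log Δ > W.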