PAPER: Privacy-Preserving ResNet Models using Low-Degree Polynomial Approximations and Structural Optimizations on Leveled FHE

ICLR 2026 Conference Submission 20883 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Privacy-Preserving Machine Learning, Leveled Fully Homomorphic Encryption, Polynomial Approximations
Abstract: Recent work has made *non-interactive privacy-preserving inference* more practical by running deep Convolutional Neural Networks (CNNs) under Fully Homomorphic Encryption (FHE). However, these methods remain limited by their reliance on *bootstrapping*, a costly FHE operation applied across multiple layers that severely slows inference. They also depend on *high-degree polynomial approximations* of non-linear activations, which increase multiplicative depth and reduce accuracy by 2–5% compared to plaintext ReLU models. In this work, we focus on ResNets, a widely adopted benchmark architecture in privacy-preserving inference, and close the accuracy gap between their FHE-based non-interactive models and their plaintext counterparts, while also achieving faster inference than existing methods. We use a *quadratic polynomial approximation* of ReLU, which achieves the theoretical minimum multiplicative depth for a non-linear activation, together with a penalty-based training strategy. We further introduce *structural optimizations* such as node fusing, weight redistribution, and tower reuse. These optimizations reduce the required FHE levels in CNNs by nearly a factor of five compared to prior work, allowing us to *run ResNet models under leveled FHE without bootstrapping*. To further accelerate inference and recover the accuracy typically lost with polynomial approximations, we introduce parameter clustering along with a joint strategy of data encoding layout and ensemble techniques. Experiments with ResNet-18, ResNet-20, and ResNet-32 on CIFAR-10 and CIFAR-100 show that our approach achieves up to $4\times$ faster private inference than prior work, with accuracy comparable to plaintext ReLU models.
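
The abstract does not specify the quadratic coefficients or the exact penalty formulation. As a hypothetical illustration of the idea only, the sketch below replaces ReLU with a depth-1 quadratic activation and adds a range penalty on pre-activations during training, so the network learns to stay where the low-degree fit is assumed to track ReLU. The coefficients `a`, `b`, `c`, the range `bound`, and the weight `lam` are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class QuadReLU(nn.Module):
    """Quadratic stand-in for ReLU: f(x) = a*x^2 + b*x + c.

    A single squaring costs one ciphertext-ciphertext multiplication,
    i.e. the minimum multiplicative depth for a non-linear activation.
    Coefficients here are illustrative placeholders, not the paper's.
    """
    def __init__(self, a=0.25, b=0.5, c=0.0, bound=4.0):
        super().__init__()
        self.a, self.b, self.c = a, b, c
        self.bound = bound            # range in which the fit is assumed valid
        self.penalty = torch.tensor(0.0)

    def forward(self, x):
        # Penalize pre-activations that leave the assumed approximation range,
        # so trained inputs stay where the quadratic is close to ReLU.
        excess = (x.abs() - self.bound).clamp(min=0.0)
        self.penalty = (excess ** 2).mean()
        return self.a * x * x + self.b * x + self.c

def total_loss(model, logits, targets, lam=1e-3):
    # Task loss plus accumulated range penalties (penalty-based training).
    ce = nn.functional.cross_entropy(logits, targets)
    reg = sum(m.penalty for m in model.modules() if isinstance(m, QuadReLU))
    return ce + lam * reg
```

In a ResNet, this module would simply replace each `nn.ReLU`; the penalty terms collected during the forward pass are added to the task loss at each training step.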
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 20883