High-speed secure random number generator co-processors for privacy-preserving machine learning

Published: 17 Oct 2024, Last Modified: 07 Dec 2024 · MLNCP Poster · CC BY 4.0
Keywords: Random number generation, hardware acceleration, AI, machine learning, co-processor, differential privacy, private training
TL;DR: A position paper arguing that high-speed, low-power random number generation is a promising area for ML/AI accelerator development.
Abstract: As machine learning (ML) increasingly handles sensitive data, there is a growing need for secure implementations of privacy-preserving techniques like differential privacy (DP). While random number generation is essential for ML applications, from basic operations to advanced privacy mechanisms, current solutions face a critical trade-off: modern pseudo-random number generators (PRNGs) are highly optimized for ML workloads, but lack the cryptographic guarantees required for secure real-world DP implementations. Our benchmark of private training shows a 43–530% increase in single-step runtime after switching to a cryptographically secure generator, even with available hardware acceleration. This result highlights a major gap in the integration of secure RNGs into GPU-accelerated ML. In this position paper, we argue that dedicated hardware RNG co-processors could bridge this gap by providing high-throughput true random numbers from physical entropy sources while dramatically reducing power consumption compared to software implementations. Such co-processors would be especially valuable for on-device private learning and other edge AI applications where both security and energy efficiency are essential.
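The PRNG-versus-CSPRNG trade-off the abstract benchmarks can be illustrated with a minimal CPU-only sketch: drawing the Gaussian noise a DP mechanism needs once from a fast non-cryptographic generator and once from the operating system's cryptographically secure entropy source, then comparing wall-clock time. This is an assumed, simplified stand-in for the paper's GPU benchmark, not its actual methodology; the sample count and the Box-Muller mapping from secure uniform bits to Gaussians are illustrative choices.

```python
import os
import time
import numpy as np

N = 1_000_000  # illustrative number of noise samples for one private training step

# Fast, non-cryptographic path: NumPy's default PCG64 PRNG.
t0 = time.perf_counter()
fast = np.random.default_rng(0).standard_normal(N)
t_fast = time.perf_counter() - t0

# Cryptographically secure path: draw raw bits from the OS CSPRNG
# (os.urandom) and map them to Gaussians with the Box-Muller transform.
t0 = time.perf_counter()
raw = np.frombuffer(os.urandom(8 * N), dtype=np.uint64)
u = (raw >> 11) * (1.0 / (1 << 53))          # 53-bit uniforms in [0, 1)
u1, u2 = u[: N // 2], u[N // 2:]
u1 = np.clip(u1, 1e-12, None)                # avoid log(0)
r = np.sqrt(-2.0 * np.log(u1))
secure = np.concatenate([r * np.cos(2.0 * np.pi * u2),
                         r * np.sin(2.0 * np.pi * u2)])
t_secure = time.perf_counter() - t0

print(f"PRNG: {t_fast:.4f}s  CSPRNG: {t_secure:.4f}s  "
      f"slowdown: {t_secure / t_fast:.1f}x")
```

Even this toy version typically shows a multiple-times slowdown for the secure path, which is the gap the proposed hardware RNG co-processors would close by supplying high-throughput true randomness directly.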
Submission Number: 41