TL;DR: Our system Packrat improves CPU-based DNN inference for smaller models by automatically choosing the number of model instances and the batch size and thread count for each.
Abstract: In this paper, we investigate how to push the performance limits of serving Deep Neural Network (DNN) models on CPU-based servers. Specifically, we observe that while intra-operator parallelism across multiple threads is an effective way to reduce inference latency, it provides diminishing returns. Our primary insight is that instead of running a single instance of a model with all available threads on a server, running multiple instances, each with smaller batch sizes and fewer threads for intra-op parallelism, can provide lower inference latency. However, the right configuration is hard to determine manually, since it is workload-dependent (the DNN model and batch size used by the serving system) and deployment-dependent (the number of CPU cores on the server). We present Packrat, a new serving system for online inference that, given a model and batch size (B), algorithmically picks the optimal number of instances (p), the number of threads each should be allocated (t), and the batch sizes each should operate on (b) to minimize latency. Packrat is built as an extension to TorchServe and supports online reconfigurations to avoid serving downtime. Averaged across a range of batch sizes, Packrat improves inference latency by 1.43× to 1.83× on a range of commonly used DNNs.
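To make the abstract's search concrete, here is a minimal sketch (not Packrat's actual implementation) of choosing (p, t, b) for a batch of size B on a C-core server: enumerate splits of the batch across p instances with t threads each, subject to p * t ≤ C, and keep the split with the lowest estimated latency. The `estimate_latency` callable is a hypothetical stand-in for Packrat's profiling-based latency model.

```python
def best_config(B, C, estimate_latency):
    """Return (p, t, b) minimizing estimated per-batch latency.

    p: number of model instances
    t: threads per instance (p * t must not exceed C cores)
    b: per-instance batch size (instances split batch B as evenly as possible)
    """
    best = None
    for p in range(1, min(B, C) + 1):
        b = -(-B // p)  # ceil(B / p): largest per-instance batch
        for t in range(1, C // p + 1):
            latency = estimate_latency(b, t)
            if best is None or latency < best[0]:
                best = (latency, p, t, b)
    _, p, t, b = best
    return p, t, b

# Example: 16-core server, batch of 32, and a toy latency model where
# extra threads help with diminishing returns (the paper's observation)
# and larger batches cost proportionally more.
p, t, b = best_config(B=32, C=16, estimate_latency=lambda b, t: b / (t ** 0.5))
print(p, t, b)
```

Note the toy model ignores inter-instance contention; in practice the estimator would come from measured profiles, as the lay summary below describes.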
Lay Summary: Minimizing CPU-based inference latency for a given workload is challenging. Pure inter- and intra-op parallelism results in sub-optimal latency, and the best configuration depends on the model and the CPU hardware. Packrat solves this with an automated approach that combines selective profiling, an optimizer that estimates the performance of unprofiled configurations and suggests latency-minimizing ones, and online reconfiguration to avoid serving downtime. Collectively, these let Packrat realize latency and throughput speedups of 1.43× to 1.83×, averaged across batch sizes, on a range of common DNNs.
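The lay summary's "estimate the performance of unprofiled configurations" step could look like the hedged illustration below: profile a few (batch size, threads) points, then interpolate latency for configurations that were never measured. Packrat's actual estimator may well differ; the inverse-distance weighting and all names here are illustrative assumptions.

```python
import numpy as np

def make_estimator(profiled):
    """profiled: dict mapping (batch_size, threads) -> measured latency (ms)."""
    points = np.array(list(profiled))                    # (n, 2) grid of configs
    values = np.array([profiled[k] for k in profiled])   # measured latencies

    def estimate(batch_size, threads):
        # Inverse-distance weighting over profiled points: a simple,
        # dependency-light way to interpolate between measurements.
        d = np.linalg.norm(points - np.array([batch_size, threads]), axis=1)
        if np.any(d == 0):
            return float(values[np.argmin(d)])  # exact profiled point
        w = 1.0 / d ** 2
        return float(np.sum(w * values) / np.sum(w))

    return estimate

# Example: three profiled points, then estimate an unprofiled configuration.
est = make_estimator({(8, 4): 12.0, (16, 8): 15.0, (32, 16): 22.0})
print(est(16, 4))
```

Selective profiling keeps the measured set small; the estimator fills in the rest of the configuration space so the optimizer can search it cheaply.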
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/msr-fiddle/packrat
Primary Area: Applications->Energy
Keywords: CPU, Inference
Submission Number: 13296