TL;DR: Puzzle uses large-scale NAS to optimize LLM inference for target hardware, cutting costs while retaining nearly all of the parent model's performance. Demonstrated on a single NVIDIA H100 GPU, achieving a 2.17x throughput speedup with minimal accuracy loss.
Abstract: Large language models (LLMs) offer remarkable capabilities, yet their high inference costs restrict wider adoption.
While increasing parameter counts improves accuracy, it also broadens the gap between state-of-the-art capabilities and practical deployability. We present **Puzzle**, a hardware-aware framework that accelerates the inference of LLMs while preserving their capabilities.
Using neural architecture search (NAS) at a large scale, Puzzle optimizes models with tens of billions of parameters.
Our approach utilizes blockwise local knowledge distillation (BLD) for parallel architecture exploration and employs mixed-integer programming for precise constraint optimization.
We showcase our framework’s impact via Llama-3.1-Nemotron-51B-Instruct (Nemotron-51B) and Llama-3.3-Nemotron-49B, two publicly available models derived from Llama-70B-Instruct. Both models achieve a 2.17x inference throughput speedup, fitting on a single NVIDIA H100 GPU while retaining 98.4% of the original model's benchmark accuracies.
These are the most accurate models supporting inference on a single H100 GPU with large batch sizes, despite being trained on at most 45B tokens, far fewer than the 15T used to train Llama-70B.
Lastly, we show that lightweight alignment on these derived models allows them to surpass the parent model in specific capabilities.
Our work establishes that powerful LLMs can be optimized for efficient deployment with only negligible loss in quality, underscoring that inference performance, not parameter count alone, should guide model selection.
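To make the blockwise local distillation (BLD) step concrete, here is a minimal sketch, assuming a PyTorch setup: each candidate block is trained to mimic its parent block's outputs on the same hidden states, which is what makes the exploration of alternatives parallelizable. All names, the MSE loss, and the optimizer settings are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of blockwise local distillation (BLD).
# Assumes PyTorch; block names, loss, and hyperparameters are
# illustrative, not Puzzle's exact training recipe.
import torch
import torch.nn as nn

def distill_block(parent_block: nn.Module,
                  candidate_block: nn.Module,
                  hidden_states: torch.Tensor,
                  steps: int = 100,
                  lr: float = 1e-4) -> float:
    """Train `candidate_block` to reproduce `parent_block`'s output
    on the same inputs. Because each candidate only needs its parent
    block's local input/output pairs, every (layer, candidate) pair
    can be distilled independently and in parallel."""
    parent_block.eval()
    optimizer = torch.optim.AdamW(candidate_block.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        with torch.no_grad():
            target = parent_block(hidden_states)  # teacher output
        pred = candidate_block(hidden_states)     # student output
        loss = loss_fn(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return loss.item()
```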
Lay Summary: Large language models (LLMs) are incredibly powerful but very expensive to run. Their massive size makes them difficult to deploy in real-world settings, especially when hardware resources are limited—like using just a single GPU or operating under strict memory constraints. Training a new model from scratch for each hardware setup is far too costly to be practical.
We present **Puzzle**, a method that adapts existing LLMs to specific hardware and usage scenarios, making them much faster to run while preserving nearly all of their accuracy. Puzzle works by breaking the original model into smaller building blocks and exploring leaner alternatives for each one—like swapping puzzle pieces. A mathematical solver then assembles the best combination of blocks to meet performance and hardware goals, and we briefly finetune the model to ensure all the parts work smoothly together.
With Puzzle, we built models that run more than twice as fast on an NVIDIA H100 GPU while maintaining over 98% of the original accuracy—and in some cases, even improving performance. Puzzle enables affordable, efficient deployment of large AI models, tailored to real-world needs without requiring massive compute budgets.
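As a rough illustration of the solver step described in the lay summary, the sketch below frames block selection as a small mixed-integer program: pick exactly one candidate per layer to maximize summed quality under a latency budget. It assumes the open-source PuLP library, and the quality scores, costs, and budget are made-up toy numbers; Puzzle's actual objective and constraints are far richer.

```python
# Toy mixed-integer program for assembling a model from candidate
# blocks, assuming the PuLP library. All numbers are illustrative.
import pulp

# candidates[layer] = list of (quality_score, latency_cost) options
candidates = {
    0: [(1.00, 10.0), (0.97, 6.0), (0.90, 3.0)],
    1: [(1.00, 10.0), (0.98, 7.0), (0.88, 2.5)],
    2: [(1.00, 10.0), (0.95, 5.0), (0.85, 2.0)],
}
latency_budget = 18.0

prob = pulp.LpProblem("puzzle_block_selection", pulp.LpMaximize)
x = {(l, i): pulp.LpVariable(f"x_{l}_{i}", cat="Binary")
     for l, opts in candidates.items() for i in range(len(opts))}

# Objective: maximize the summed quality of the chosen blocks.
prob += pulp.lpSum(candidates[l][i][0] * x[l, i] for (l, i) in x)

# Exactly one candidate block must be chosen per layer.
for l, opts in candidates.items():
    prob += pulp.lpSum(x[l, i] for i in range(len(opts))) == 1

# The assembled model must fit the hardware latency budget.
prob += pulp.lpSum(candidates[l][i][1] * x[l, i]
                   for (l, i) in x) <= latency_budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = {l: next(i for i in range(len(candidates[l]))
                  if x[l, i].value() == 1) for l in candidates}
print(chosen)  # index of the selected candidate for each layer
```

In this toy instance the solver trades a little per-block quality in some layers for cheaper blocks elsewhere, so the assembled model meets the budget while losing as little quality as possible, which mirrors the lay summary's description of swapping puzzle pieces under hardware goals.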
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Models, Neural Architecture Search with Distillation, Architecture Design for Efficient Inference, Blockwise Local Distillation, Hardware-Aware Optimization with Distillation
Submission Number: 2193