From Bits to Chips: An LLM-based Hardware-Aware Quantization Agent for Streamlined Deployment of LLMs

ICLR 2026 Conference Submission 4197 Authors

11 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: LLMs, Agent, Model Compression
Abstract: Deploying models, especially large language models (LLMs), is becoming increasingly attractive to a broad user base, including users without specialized expertise. However, given the resource constraints of much hardware, maintaining high accuracy with larger models while satisfying hardware requirements remains a significant challenge. Model quantization techniques help mitigate memory and compute bottlenecks, yet the added complexity of tuning and deploying quantized models exacerbates these challenges, making the process unfriendly to most users. We introduce the **Hardware-Aware Quantization Agent (HAQA)**, an automated framework that leverages LLMs to streamline the entire quantization and deployment process through efficient hyperparameter tuning and hardware configuration, thereby simultaneously improving deployment quality and ease of use for a broad range of users. Our results demonstrate up to a **2.3×** inference speedup, along with increased throughput and improved accuracy relative to unoptimized Llama models. Additionally, HAQA implements adaptive quantization strategies across diverse hardware platforms, automatically finding optimal settings even when they appear counterintuitive, thereby reducing extensive manual effort and demonstrating superior adaptability.
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 4197