GAR: Carbon-Aware Routing for LLM Inference via Constrained Optimization

20 Sept 2025 (modified: 22 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: LLM Routing, Carbon-Aware AI, Sustainable AI, Constrained Optimization
TL;DR: A routing framework that uses lightweight estimators to select the most carbon-efficient LLM for each query while satisfying strict accuracy and latency SLOs.
Abstract: The growing deployment of large language models (LLMs) makes per-request routing essential for balancing response quality and computational cost across heterogeneous model pools. Current routing methods rarely treat sustainable energy use and CO$_2$ emissions as optimization objectives, despite grid carbon intensity varying by time and region and models differing significantly in energy consumption. To address this gap, we introduce Green-Aware Routing (GAR), a constrained multi-objective optimization framework that minimizes per-request CO$_2$ emissions subject to explicit accuracy floors and $p_{95}$-latency service-level objectives (SLOs). GAR employs adaptive constraint optimization through per-dataset floor tuning and incorporates lightweight estimators for correctness, tail latency, and carbon emissions, enabling real-time routing decisions without additional inference passes. We present GAR-PD, an online primal-dual algorithm with $O(\sqrt{T})$ regret bounds, alongside practical heuristic variants (GAR-Fixed, GAR-$\varepsilon$, GAR-Target) that achieve high feasibility coverage while limiting accuracy degradation. Comprehensive experiments across standard NLP benchmarks with heterogeneous LLM pools (7B–70B) demonstrate that GAR achieves substantial carbon reductions while maintaining competitive accuracy and $p_{95}$ latency guarantees, providing a practical, theoretically grounded approach to sustainable LLM inference.
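The abstract's description of GAR-PD suggests a standard online primal-dual loop: a primal step that routes each request by minimizing a Lagrangian of carbon cost plus multiplier-weighted constraint slack, and a dual step that grows the multipliers when the accuracy floor or latency SLO is violated. Below is a minimal illustrative sketch under those assumptions; all names, estimator values, and the step size `eta` are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of an online primal-dual router in the spirit of GAR-PD.
# The estimators (carbon_est, acc_est, lat_est) and all numbers are illustrative
# assumptions, not the paper's actual components.

def route(models, carbon_est, acc_est, lat_est,
          acc_floor, lat_slo, lam_acc, lam_lat):
    """Primal step: pick the model minimizing the per-request Lagrangian
    carbon + lam_acc * (acc_floor - acc) + lam_lat * (lat - lat_slo)."""
    def lagrangian(m):
        return (carbon_est[m]
                + lam_acc * (acc_floor - acc_est[m])
                + lam_lat * (lat_est[m] - lat_slo))
    return min(models, key=lagrangian)

def dual_update(lam_acc, lam_lat, acc_obs, lat_obs,
                acc_floor, lat_slo, eta=0.05):
    """Dual step: raise each multiplier in proportion to its observed
    constraint violation, projected onto the nonnegative orthant."""
    lam_acc = max(0.0, lam_acc + eta * (acc_floor - acc_obs))
    lam_lat = max(0.0, lam_lat + eta * (lat_obs - lat_slo))
    return lam_acc, lam_lat

# Toy pool: two models with made-up per-request estimates.
models = ["7B", "70B"]
carbon_est = {"7B": 1.0, "70B": 8.0}    # gCO2 per request (illustrative)
acc_est    = {"7B": 0.70, "70B": 0.90}  # predicted correctness probability
lat_est    = {"7B": 0.4, "70B": 2.5}    # predicted p95 latency in seconds

# With zero multipliers the router is purely carbon-greedy and picks "7B";
# repeated accuracy violations then grow lam_acc until "70B" is preferred.
choice = route(models, carbon_est, acc_est, lat_est,
               acc_floor=0.80, lat_slo=3.0, lam_acc=0.0, lam_lat=0.0)
```

The projection `max(0.0, ...)` keeps the multipliers dual-feasible, and a diminishing or small fixed `eta` is what typically yields the $O(\sqrt{T})$-style regret guarantees the abstract cites for this family of algorithms.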
Primary Area: optimization
Submission Number: 23829