MESS+: Energy-Optimal Inferencing in Language Model Zoos with Service Level Guarantees

Published: 10 Oct 2024 · Last Modified: 19 Nov 2024 · AFM 2024 Poster · License: CC BY 4.0
Keywords: Large Language Models, Model Zoos, Online Optimization, Service Level Agreements
TL;DR: We introduce MESS+, an online stochastic optimization algorithm that selects energy-efficient large language models per inference request, achieving up to 2.5× energy savings while maintaining SLA-defined constraints.
Abstract: Open-weight large language model (LLM) zoos allow users to quickly integrate state-of-the-art models into their systems. Despite this increasing availability, selecting the most appropriate model for a given task still largely relies on public benchmark leaderboards and educated guesses. This is unsatisfactory for both inference service providers and end users: providers typically prioritize cost efficiency, while end users prioritize the output quality of their inference requests. In commercial settings, these two priorities are often reconciled in Service Level Agreements (SLAs). We present MESS+, an online stochastic optimization algorithm that selects an energy-optimal model from a model zoo on a per-inference-request basis. For an SLA that requires high accuracy, MESS+ is up to 2.5× more energy efficient than selecting an LLM from the zoo at random, while maintaining the SLA quality constraints.
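To make the problem setting concrete, the sketch below shows one generic way to trade off per-request energy against a long-run accuracy SLA using a drift-plus-penalty (virtual queue) heuristic. This is not the MESS+ algorithm itself; the model zoo entries, energy and accuracy estimates, and the trade-off parameter V are illustrative assumptions.

```python
# Illustrative sketch of per-request model selection under an accuracy SLA.
# NOTE: this is a generic drift-plus-penalty heuristic, not the MESS+ method
# from the paper. All model names, energy/accuracy numbers, and the
# trade-off weight V are assumptions for illustration only.

from dataclasses import dataclass
import random

@dataclass
class ZooModel:
    name: str
    energy_per_request_j: float   # assumed average energy cost (Joules)
    expected_accuracy: float      # assumed probability of a correct answer

ZOO = [
    ZooModel("small-llm", energy_per_request_j=5.0, expected_accuracy=0.78),
    ZooModel("medium-llm", energy_per_request_j=20.0, expected_accuracy=0.86),
    ZooModel("large-llm", energy_per_request_j=80.0, expected_accuracy=0.93),
]

SLA_TARGET_ACCURACY = 0.85   # long-run accuracy the SLA must guarantee
V = 50.0                     # energy-vs-constraint trade-off weight (assumed)
queue = 0.0                  # virtual queue tracking accumulated SLA deficit

def select_model() -> ZooModel:
    """Pick the model minimizing energy cost minus queue-weighted quality."""
    return min(ZOO, key=lambda m: V * m.energy_per_request_j
                                  - queue * m.expected_accuracy)

def update_queue(observed_correct: bool) -> None:
    """Grow the virtual queue when the SLA target is missed, shrink otherwise."""
    global queue
    queue = max(queue + SLA_TARGET_ACCURACY - float(observed_correct), 0.0)

if __name__ == "__main__":
    random.seed(0)
    total_energy, correct = 0.0, 0
    n_requests = 10_000
    for _ in range(n_requests):
        model = select_model()
        # Simulate whether the chosen model answers this request correctly.
        was_correct = random.random() < model.expected_accuracy
        update_queue(was_correct)
        total_energy += model.energy_per_request_j
        correct += was_correct
    print(f"empirical accuracy: {correct / n_requests:.3f}")
    print(f"avg energy per request: {total_energy / n_requests:.1f} J")
```

Under this kind of rule, cheap models are preferred while the SLA is being met, and the virtual queue steers traffic toward stronger, more expensive models whenever the observed quality falls behind the target.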
Submission Number: 94