Efficiently Serving Large Multimodal Models Using EPD Disaggregation

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: An efficient inference system for serving large multimodal models using disaggregation.
Abstract: Large Multimodal Models (LMMs) extend Large Language Models (LLMs) by handling diverse inputs such as images, audio, and video, but at the cost of adding a multimodal encoding stage that increases both computational and memory overhead. This step negatively affects key Service Level Objectives (SLOs), such as time to first token (TTFT) and time per output token (TPOT). We introduce Encode-Prefill-Decode (EPD) Disaggregation, a novel framework that separates the encoding, prefill, and decode stages onto dedicated resources. Unlike current systems, which bundle encoding and prefill together, our approach decouples these steps, unlocking new opportunities and optimizations. These include a mechanism to cache multimedia tokens for efficient transfer, a novel way to parallelize the encoding load within a request, a module for optimal resource allocation for disaggregated serving, and a novel role-switching method to handle changing workload characteristics. Experimental evaluations with popular LMMs show substantial gains in memory efficiency (up to 15× lower peak memory utilization), batch size (up to 22× larger), images per request (up to 10× more), and KV cache size (up to 2.2× larger). Furthermore, it leads to significant improvements in SLO attainment (up to 90–100% improvement) and TTFT (up to 71% reduction) compared to systems that do not disaggregate. The code is available at [https://github.com/vbdi/epdserve](https://github.com/vbdi/epdserve).
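For intuition, below is a minimal, runnable sketch of the disaggregation idea described in the abstract: the encode, prefill, and decode stages each run on their own worker, and only the cached multimodal tokens and KV-cache metadata are handed downstream. This is an illustrative toy under assumed names (`Request`, `encode_worker`, etc.), not the authors' implementation; see the linked repository for the actual system.

```python
# Conceptual sketch of EPD disaggregation: encode, prefill, and decode run on
# separate workers, and encoded multimodal tokens are cached and passed
# between stages instead of being recomputed. All names are illustrative.
from dataclasses import dataclass, field
from queue import Queue
from threading import Thread


@dataclass
class Request:
    request_id: int
    images: list[str]                       # paths/URLs of multimodal inputs
    prompt: str
    mm_tokens: list[float] = field(default_factory=list)  # cached encoder output
    kv_cache: dict = field(default_factory=dict)          # prefill KV cache
    output: str = ""


def encode_worker(inbox: Queue, outbox: Queue) -> None:
    """Encode stage: turn images into multimodal tokens and cache them."""
    while (req := inbox.get()) is not None:
        # Placeholder for a vision encoder; in the paper this load can also be
        # parallelized across encode workers within a single request.
        req.mm_tokens = [0.0] * (len(req.images) * 4)
        outbox.put(req)
    outbox.put(None)


def prefill_worker(inbox: Queue, outbox: Queue) -> None:
    """Prefill stage: consume the cached multimodal tokens plus the text prompt."""
    while (req := inbox.get()) is not None:
        req.kv_cache = {"len": len(req.mm_tokens) + len(req.prompt.split())}
        outbox.put(req)
    outbox.put(None)


def decode_worker(inbox: Queue, outbox: Queue) -> None:
    """Decode stage: generate output tokens from the transferred KV cache."""
    while (req := inbox.get()) is not None:
        req.output = f"<answer generated from {req.kv_cache['len']} context tokens>"
        outbox.put(req)
    outbox.put(None)


if __name__ == "__main__":
    q_enc, q_pre, q_dec, q_out = Queue(), Queue(), Queue(), Queue()
    stages = [
        Thread(target=encode_worker, args=(q_enc, q_pre)),
        Thread(target=prefill_worker, args=(q_pre, q_dec)),
        Thread(target=decode_worker, args=(q_dec, q_out)),
    ]
    for t in stages:
        t.start()
    q_enc.put(Request(0, ["cat.png", "dog.png"], "What animals are shown?"))
    q_enc.put(None)                         # sentinel shuts the pipeline down
    while (done := q_out.get()) is not None:
        print(done.request_id, done.output)
    for t in stages:
        t.join()
```

In the real system each stage would map to its own pool of GPUs sized by the paper's resource-allocation module, with the role-switching mechanism rebalancing workers as the workload mix changes; the queues above stand in for the cross-device transfer of cached multimodal tokens and KV caches.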
Lay Summary: Modern AI systems can now understand not just text, but also images 🖼️, audio 🔊, and video 🎥. These powerful tools—called **multimodal models**—power applications that can answer questions about pictures, assist with medical scans, or even analyze videos. ✨ However, **running these models is slow and memory-hungry**, especially when dealing with high-resolution images or complex inputs. That’s because each model request goes through several heavy processing steps, and current systems make all those steps share the same resources—leading to traffic jams inside the computer. Our work introduces a smarter way to run these models. We _break the process into three stages_—understanding the multimodal input 🖼️, preparing the response 🧮, and generating the output ✍️—and **assign each stage its own set of specialized GPUs**. This separation avoids bottlenecks🚦and lets the system run more smoothly and efficiently. 🚀 With our approach, the system can handle **10× more images**, use **15× less memory**, and respond **up to 71% faster** than current methods. This makes **advanced AI tools more practical** for real-world use—in areas like healthcare 🏥, creative work 🎨, and interactive digital assistants 🧑‍💻.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/vbdi/epdserve
Primary Area: Optimization->Large Scale, Parallel and Distributed
Keywords: Efficient Inference, Large Multimodal Model Inference, Disaggregated Inference
Submission Number: 8290