Keywords: Edge-Cloud Speculative Decoding, Conformal Prediction, Lattice Quantization
TL;DR: We propose a communication-efficient speculative decoding framework for edge-cloud scenarios that reduces end-to-end latency.
Abstract: Edge–cloud speculative decoding (SD) accelerates inference by having a cloud-based large language model (LLM) verify draft tokens generated by a resource-constrained small language model (SLM) at the edge. A central bottleneck is the \textit{limited bandwidth of the edge–cloud link}, which necessitates efficient compression of draft token distributions. We first derive an information-theoretic bound that decomposes the token resampling rate into contributions from SLM–LLM distribution mismatch and from quantization distortion. Guided by this analysis, we propose the Sparse Quantize-and-Sample SD (SQS-SD) framework, which exploits distributional sparsity through structured sparsification and lattice-based quantization. Within this framework, $K$-SQS applies fixed top-$K$ truncation, while C-SQS adaptively adjusts the retained token set via online \textit{conformal prediction} to ensure bounded deviation from the dense distribution. Empirical results confirm that both approaches improve end-to-end latency and resampling rate in complementary operating regimes.
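To make the two truncation strategies concrete, below is a minimal Python sketch of fixed top-$K$ truncation ($K$-SQS) versus a conformal-style adaptive truncation (C-SQS). The function names, the split-conformal calibration rule, and the use of truncated tail mass as the nonconformity score are illustrative assumptions for exposition, not the authors' implementation; quantization and transmission are omitted.

```python
# Hypothetical sketch of the two truncation strategies in SQS-SD.
# Assumptions: p is the SLM's draft token distribution (a 1-D numpy array
# summing to 1); `scores` are past truncated-mass nonconformity scores
# maintained online. All names here are illustrative, not the paper's API.
import numpy as np

def k_sqs_truncate(p, k=16):
    """K-SQS: keep the fixed top-k probability masses of the draft
    distribution and renormalize; the tail is dropped before quantization."""
    idx = np.argsort(p)[::-1][:k]   # indices of the k largest masses
    q = np.zeros_like(p)
    q[idx] = p[idx]
    return q / q.sum(), idx

def c_sqs_truncate(p, scores, alpha=0.1):
    """C-SQS (sketch): pick the smallest retained set whose dropped tail
    mass stays below a conformally calibrated threshold, so the truncated
    distribution deviates from the dense one by at most tau with
    probability >= 1 - alpha over the calibration distribution."""
    # Split-conformal quantile of past truncated-mass scores.
    n = len(scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    tau = np.quantile(scores, q_level)
    # Smallest prefix of the sorted distribution with tail mass <= tau.
    order = np.argsort(p)[::-1]
    cum = np.cumsum(p[order])
    k = min(int(np.searchsorted(cum, 1.0 - tau)) + 1, len(p))
    idx = order[:k]
    q = np.zeros_like(p)
    q[idx] = p[idx]
    return q / q.sum(), idx
```

Under this reading, $K$-SQS trades a fixed uplink payload for a variable approximation error, while C-SQS fixes the tolerated error (via the calibrated threshold $\tau$) and lets the payload size vary with how peaked the draft distribution is, which matches the complementary operating regimes reported in the abstract.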
Submission Number: 57