Why Should the Server Do It All?: A Scalable, Versatile, and Model-Agnostic Framework for Server-Light DNN Inference over Massively Distributed Clients via Training-Free Intermediate Feature Compression
Keywords: model partitioning, intermediate feature compression, multi-device inference
Abstract: Modern DNNs often rely on edge–cloud model partitioning (MP), but widely used schemes fix shallow, static split points that underutilize edge compute and concentrate latency and energy on the server. The problem is exacerbated in autoregressive (AR) LLM inference, where per‑token forward passes repeatedly generate bulky intermediate features (IFs). We introduce SLICER, a retraining‑free, architecture‑agnostic framework that compresses IFs to reduce both communication and server load in split computing. SLICER combines (i) asymmetric top‑K filtering (ATKF) to sparsify low‑magnitude activations, (ii) magnitude‑splitting (MS) to group the remaining non‑zeros into equal‑cardinality blocks, and (iii) adaptive bit quantization (ABQ) that selects per‑block bitwidths under a distortion budget. Across standard vision and LLM workloads (e.g., ImageNet/COCO; HellaSwag, PIQA, ARC‑E/C, GSM8K, HumanEval), SLICER reduces uplink volume by up to 10× and server GPU time by up to 4.4×, while keeping task quality within ~0–3 pp of baseline. In multi‑device settings and AR LLMs, SLICER scales by shifting meaningful compute to the edge and lowering bits‑per‑token and server time per token, stabilizing per‑step traffic. The codec attaches to off‑the‑shelf models without retraining or architectural changes, offering a plug‑and‑play path to scalable, low‑latency distributed inference. Code is provided in the supplementary material.
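As an illustration of the three-stage codec the abstract describes (ATKF → MS → ABQ), the minimal sketch below shows one plausible way the pipeline could operate on a single intermediate feature map. All function names, the candidate bitwidths, the distortion budget, and the top-K/block counts are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a SLICER-style codec: asymmetric top-K filtering,
# magnitude splitting into equal-cardinality blocks, adaptive bit quantization.
# Names and constants are assumptions for illustration only.
import numpy as np

def atkf(x: np.ndarray, k: int) -> np.ndarray:
    """Asymmetric top-K filtering: keep only the K largest-magnitude
    activations of the intermediate feature, zeroing out the rest."""
    flat = x.ravel().copy()
    if k < flat.size:
        drop_idx = np.argpartition(np.abs(flat), -k)[:-k]  # indices of the smaller values
        flat[drop_idx] = 0.0
    return flat.reshape(x.shape)

def magnitude_split(values: np.ndarray, num_blocks: int) -> list:
    """Magnitude splitting: sort surviving non-zeros by |value| and group
    them into (roughly) equal-cardinality blocks."""
    order = np.argsort(np.abs(values))
    return np.array_split(values[order], num_blocks)

def abq_block(block: np.ndarray, budget: float, bitwidths=(2, 4, 8)):
    """Adaptive bit quantization: pick the smallest bitwidth whose uniform
    quantization error stays under the per-block distortion budget."""
    lo, hi = block.min(), block.max()
    for b in bitwidths:
        levels = 2 ** b - 1
        step = (hi - lo) / levels if hi > lo else 1.0
        q = np.round((block - lo) / step) * step + lo
        if np.mean((q - block) ** 2) <= budget:
            return q, b
    return q, bitwidths[-1]  # fall back to the largest candidate bitwidth

# Example: compress one intermediate feature map at the edge before uplink.
feat = np.random.randn(64, 56, 56).astype(np.float32)
sparse = atkf(feat, k=10_000)
nonzeros = sparse[sparse != 0]
blocks = magnitude_split(nonzeros, num_blocks=16)
encoded = [abq_block(b, budget=1e-3) for b in blocks]
```

Under this reading, only the quantized non-zero blocks (plus their positions and per-block bitwidths) would be transmitted to the server, which is what would reduce both uplink volume and server-side work.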
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 10391