Luthier: Bridging Auto-Tuning and Vendor Libraries for Efficient Deep Learning Inference

Yongin Kwon, Joohyoung Cha, Sehyeon Oh, Misun Yu, Jeman Park, Jemin Lee

Published: 30 Nov 2025, Last Modified: 23 Nov 2025 · ACM Transactions on Embedded Computing Systems · CC BY-SA 4.0
Abstract: Recent deep learning compilers commonly adopt auto-tuning approaches that search for optimal kernel configurations for tensor programs from scratch, requiring tens of hours per operation and neglecting crucial optimization factors for parallel computing on asymmetric multicore processors. Meanwhile, hand-optimized inference libraries from hardware vendors deliver high performance but lack the flexibility and automation needed for emerging models. To close this gap, we propose Luthier, which significantly narrows the search space by selecting the best kernel from existing inference libraries, and employs cost model-based profiling to quickly determine the most efficient workload distribution for parallel computing. As a result, Luthier achieves up to 2.0x faster execution on convolution-based vision models and transformer-based language models (BERT, GPT) on both CPUs and GPUs, while reducing average tuning time by 95% compared with ArmNN, AutoTVM, Ansor, ONNXRuntime, and TFLite.
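The two ideas the abstract describes can be illustrated with a minimal sketch: pick the cheapest kernel among existing library implementations via a cost model, then split the work across asymmetric cores in proportion to profiled throughput. All names here (`Kernel`, `select_kernel`, `split_workload`) and the numbers are hypothetical illustrations, not Luthier's actual API or data.

```python
# Hypothetical sketch of kernel selection + workload splitting;
# not Luthier's real interface.
from dataclasses import dataclass

@dataclass
class Kernel:
    name: str             # e.g. a vendor-library convolution variant
    cost_per_elem: float  # estimated cycles per output element (cost model)

def select_kernel(candidates, n_elems):
    # Instead of searching tensor-program schedules from scratch,
    # pick the cheapest kernel among existing library implementations.
    return min(candidates, key=lambda k: k.cost_per_elem * n_elems)

def split_workload(n_elems, core_throughputs):
    # Distribute work across asymmetric (big/little) cores in proportion
    # to their profiled throughput, so all cores finish at about the same time.
    total = sum(core_throughputs)
    shares = [int(n_elems * t / total) for t in core_throughputs]
    shares[0] += n_elems - sum(shares)  # give the rounding remainder to core 0
    return shares

candidates = [Kernel("im2col_gemm", 1.8), Kernel("winograd", 1.2), Kernel("direct", 2.5)]
best = select_kernel(candidates, n_elems=1 << 20)
shares = split_workload(1 << 20, core_throughputs=[4.0, 4.0, 1.0, 1.0])
print(best.name, shares)
```

Because selection ranges over a handful of library kernels rather than a combinatorial schedule space, the search finishes in seconds instead of hours, which is the mechanism behind the reported tuning-time reduction.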
External IDs: doi:10.1145/3759916