Abstract: The Low Level Virtual Machine Intermediate Representation (LLVM IR) is a key component of modern compilers, valued for its universality and cross-platform adaptability. However, identifying optimal optimization passes across platforms remains challenging, as current autotuning frameworks struggle on mobile devices due to resource constraints and network variability. This paper introduces a novel LLVM IR performance optimization framework, LLVMTuner. At its core is a deep neural network-based predictive model specifically designed to forecast the execution time of input passes on various platforms by analyzing IR features. By integrating this predictive model with an advanced autotuning framework, we enable a rapid and precise search for optimal pass lists, removing the need for real-time latency measurements on target platforms and reducing data transmission between devices and cloud-based autotuning frameworks. LLVMTuner significantly enhances the performance of LLVM IR, providing a robust solution for efficient compiler optimizations across a spectrum of computing environments, from high-performance laptops to resource-constrained mobile devices. We evaluate LLVMTuner on three heterogeneous platforms using over 600 LLVM IR benchmarks. The results show that LLVMTuner achieves an average speedup of 2.41x over the -O3 optimization configuration. Additionally, LLVMTuner reduces search overhead by 83.68% compared to OpenTuner.