Abstract: In the twilight of Moore’s law, optimizing program performance has emerged as a central focus in computer architecture research. Yet high-level source optimization remains challenging because it requires reasoning about code semantics. Our approach unifies machine learning techniques with established insights and tools from computer architecture to tackle the inherent challenges of high-level optimization. In this work, we introduce a framework that harnesses large language models (LLMs) for high-level program optimization. We curate a dataset of competitive programming submissions in C++, each accompanied by extensive unit tests, to capture performance-improving patterns. To mitigate the variability of performance measurements, we develop an evaluation harness based on the gem5 full-system simulator. Our results show a mean speedup of 6.86×, surpassing the average human optimization of 3.66×. We also give an overview of subsequent work in this space, describing how LLM-driven optimization enables performance-improving edits to be applied autonomously across billions of lines of code in Google data centers.
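To make the measurement pipeline concrete, the sketch below shows one way such a gem5-based harness could look: compile the original and the LLM-edited C++ submission with identical flags, simulate each under gem5 so that timing is deterministic rather than subject to machine noise, and report the ratio of simulated runtimes. This is an illustrative reconstruction, not the paper's released code; the gem5 binary path, the `se.py` config and its `--input` flag, and the `simSeconds` stat name are assumptions tied to a typical gem5 checkout, and the unit-test correctness check that would gate any speedup claim is elided.

```python
import re
import subprocess
from pathlib import Path

# Placeholder paths for a local gem5 build (assumptions, not the paper's
# released configuration).
GEM5_BIN = Path("gem5/build/X86/gem5.opt")
SE_CONFIG = Path("gem5/configs/example/se.py")


def compile_cpp(src: Path, out: Path) -> None:
    """Compile a single-file C++ submission with a fixed flag set so that
    compiler settings cannot confound the measured speedup."""
    subprocess.run(
        ["g++", "-O3", "-std=c++17", str(src), "-o", str(out)],
        check=True,
    )


def simulate_seconds(binary: Path, stdin_file: Path) -> float:
    """Run the binary under gem5 syscall emulation and return the simulated
    time parsed from m5out/stats.txt (deterministic, unlike wall-clock time)."""
    subprocess.run(
        [str(GEM5_BIN), str(SE_CONFIG),
         "--cmd", str(binary), "--input", str(stdin_file)],
        check=True,
    )
    stats = Path("m5out/stats.txt").read_text()
    match = re.search(r"simSeconds\s+([\d.eE+-]+)", stats)
    if match is None:
        raise RuntimeError("simSeconds not found in gem5 stats output")
    return float(match.group(1))


def speedup(original: Path, optimized: Path, test_input: Path) -> float:
    """Speedup = simulated time of the original / simulated time of the
    LLM-edited candidate; values above 1.0 mean the edit helped."""
    slow, fast = Path("orig.bin"), Path("opt.bin")
    compile_cpp(original, slow)
    compile_cpp(optimized, fast)
    return simulate_seconds(slow, test_input) / simulate_seconds(fast, test_input)


if __name__ == "__main__":
    s = speedup(Path("orig.cpp"), Path("opt.cpp"), Path("test0.txt"))
    print(f"speedup: {s:.2f}x")
```

In a full harness, the candidate would first have to pass every unit test on the benchmark inputs before its simulated runtime counts toward the reported mean; simulation makes the measurement reproducible at the cost of slower evaluation than native execution.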