COMET: Neural Cost Model Explanation Framework

Published: 27 Oct 2023, Last Modified: 23 Nov 2023, NeurIPS XAIA 2023
TL;DR: We develop the first framework, COMET, for generating faithful, generalizable, and intuitive explanations for x86 cost models.
Abstract: Cost models predict the cost of executing a given assembly-code basic block on a specific microarchitecture. Recently, neural cost models have been shown to be fairly accurate and easy to construct, making them candidates to replace the heavily engineered analytical cost models used in compilers. However, their black-box nature discourages their adoption. In this work, we develop the first framework, COMET, for generating faithful, generalizable, and intuitive explanations for neural cost models. We generate and compare COMET's explanations for the popular neural cost model Ithemal against those for an accurate CPU-simulation-based cost model, uiCA. Empirically, we find an inverse correlation between a cost model's prediction error and the granularity of the basic-block features appearing in COMET's explanations for it, suggesting potential reasons for Ithemal's higher error relative to uiCA.
Submission Track: Full Paper Track
Application Domain: None of the above / Not applicable
Clarify Domain: ML for Computer Systems
Survey Question 1: We formulate the generation of ideal black-box explanations for x86 basic block cost models as an optimization problem. We systematically relax the intractable ideal computation into an efficient, domain-specific explanation algorithm, COMET. We observe that COMET's explanations are accurate, faithful, generalizable, intuitive, and a useful augmentation to x86 basic block cost models.
Survey Question 2: ML-based program cost models are fairly accurate in their cost predictions but may behave incorrectly in certain corner cases. Hence, compiler and performance engineers are hesitant to deploy these models in practice. Explanations enable them to understand uninterpretable (ML) cost models in detail, debug them, and deploy them in their workflows with confidence, without expecting any surprises.
Survey Question 3: Our explanation framework adapts the Anchors explanation algorithm for part of its computations.
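To make the Anchors-style computation concrete, the sketch below illustrates the general idea of anchor-based black-box explanation on a toy cost model. This is not COMET's actual algorithm: `predict_cost`, the substitute-instruction pool, the tolerance, and the greedy search are all hypothetical stand-ins chosen for illustration. An "anchor" here is a subset of instruction positions that, when held fixed under random perturbation of the rest of the block, keeps the model's cost prediction stable with high probability.

```python
import random

def predict_cost(block):
    # Hypothetical stand-in for a black-box cost model such as Ithemal:
    # cost = 1 per instruction, plus 2 for each memory-accessing instruction.
    return len(block) + 2 * sum(1 for ins in block if "mem" in ins)

def perturb(block, anchor):
    # Anchors-style perturbation: keep anchored positions fixed and replace
    # every other instruction with a random substitute (a toy pool here).
    substitutes = ["add", "sub", "mul", "mov mem"]
    return [ins if i in anchor else random.choice(substitutes)
            for i, ins in enumerate(block)]

def anchor_precision(block, anchor, tol=1.0, samples=200):
    # Precision of an anchor: the fraction of perturbed blocks whose
    # predicted cost stays within `tol` of the original prediction.
    base = predict_cost(block)
    hits = sum(abs(predict_cost(perturb(block, anchor)) - base) <= tol
               for _ in range(samples))
    return hits / samples

def greedy_anchor(block, target=0.95):
    # Greedily grow the anchor, one position at a time, until its
    # estimated precision reaches `target` or the whole block is anchored.
    anchor = set()
    while anchor_precision(block, anchor) < target and len(anchor) < len(block):
        best = max(set(range(len(block))) - anchor,
                   key=lambda i: anchor_precision(block, anchor | {i}))
        anchor.add(best)
    return anchor
```

For a toy block like `["add", "mov mem", "sub"]`, the greedy search keeps adding positions until the cost prediction is stable under perturbation; the positions it retains are the features the explanation highlights. COMET's actual computation relaxes an intractable ideal optimization into a domain-specific procedure, so this greedy loop should be read only as an illustration of the anchor concept.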
Submission Number: 34