The Case for Uncertainty-Governed Predictor Hierarchies in ML for Systems
Keywords: uncertainty, ML for systems, surrogate models, Bayesian neural network, resource management
TL;DR: We propose an uncertainty-governed model hierarchy for decision making in computer systems that maintains the accuracy of complex base models while using a fast, interpretable surrogate for the majority of system decisions.
Abstract: The integration of machine learning into computer systems is limited by computational overhead and a lack of interpretability. While system designers often turn to surrogate models such as decision trees for their speed and transparency, these surrogates sacrifice accuracy relative to the base models. To bridge this gap, we propose an uncertainty-governed model hierarchy that uses model uncertainty to trigger fallbacks from a fast, interpretable surrogate to a high-accuracy base model. We evaluate this paradigm on a resource management case study using a Bayesian Neural Network (BNN) and a surrogate decision tree. Our results show that the hierarchy maintains BNN-level accuracy while handling up to 96\% of decisions with the surrogate. This reduces expected decision latency from the BNN's 31 ms to 4.27 ms and provides high interpretability without sacrificing system performance.
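The gating logic the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the uncertainty measure (one minus the surrogate's top class probability), and the threshold value are all assumptions made for the example.

```python
# Sketch of an uncertainty-governed model hierarchy: route each decision
# through a fast surrogate, and fall back to the slow, high-accuracy base
# model only when the surrogate's reported uncertainty exceeds a threshold.
# All names and numbers below are illustrative, not from the paper.

def hierarchy_predict(x, surrogate, base_model, tau=0.2):
    """Return (label, source): the surrogate's answer if its uncertainty
    is at most tau, otherwise the base model's answer."""
    label, uncertainty = surrogate(x)
    if uncertainty <= tau:
        return label, "surrogate"
    return base_model(x), "base"

# Toy stand-ins: a "decision tree" surrogate that reports uncertainty as
# one minus its top class probability, and a slower "BNN" base model.
def toy_surrogate(x):
    p = 0.9 if x > 0 else 0.55   # confident only on positive inputs
    return (1 if p >= 0.5 else 0), 1.0 - p

def toy_base_model(x):
    return 1 if x > -0.1 else 0

print(hierarchy_predict(0.5, toy_surrogate, toy_base_model))    # confident: surrogate answers
print(hierarchy_predict(-0.05, toy_surrogate, toy_base_model))  # uncertain: falls back to base
```

Tuning `tau` trades off the fraction of decisions the surrogate handles (and hence expected latency) against how closely the hierarchy tracks base-model accuracy.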
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 12