Sample-Efficient Mapspace Optimization for DNN Accelerators with Bayesian Learning

Published: 16 May 2023, Last Modified: 15 Jun 2023, ASSYST Oral
Keywords: mapspace search, hardware mappings, bayesian optimization, accelerators
TL;DR: Using Bayesian optimization to find performant hardware mappings with few samples, and enabling transfer learning between hardware configurations
Abstract: Achieving high performance on machine learning domain-specific accelerators requires a careful choice of mapping from an algorithm to an accelerator. Most algorithms for finding mappings either optimize over a coarse performance \emph{model} or empirically evaluate the performance of a large number of different mappings in the space. However, the number of samples required by these empirical approaches can be prohibitive in settings where evaluations are expensive (e.g. when using cycle-accurate simulators). This paper evaluates Bayesian-optimization-based approaches for finding mappings for hardware accelerators in settings where high sample efficiency is required. Our approaches converge to mappings comparable to those found by Timeloop's mapper while requiring an order of magnitude fewer iterations. Furthermore, our method produces surrogate models that can be used for transfer learning to new hardware configurations, further reducing the sample complexity by roughly a factor of $2$.
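The sketch below illustrates the general idea of sample-efficient mapspace search with Bayesian optimization; it is not the authors' implementation. It runs a generic Gaussian-process BO loop (via scikit-optimize) over a toy mapspace of tile sizes, and the cost function is a hypothetical analytical stand-in for an expensive evaluator such as a cycle-accurate simulator.

```python
# Minimal sketch (assumed, not the paper's method): Bayesian optimization over a
# toy mapspace of tiling factors using scikit-optimize's gp_minimize.
from skopt import gp_minimize
from skopt.space import Integer

# Toy mapspace: log2 tile sizes for the three loop dimensions of a matmul-like kernel.
space = [
    Integer(0, 6, name="log2_tile_m"),
    Integer(0, 6, name="log2_tile_n"),
    Integer(0, 6, name="log2_tile_k"),
]

def estimated_cost(x):
    """Hypothetical proxy for mapping cost (a latency-like scalar).

    In practice this would invoke an expensive evaluator (e.g. a cycle-accurate
    simulator) on the mapping encoded by x; each call is one "sample".
    """
    tm, tn, tk = (2 ** v for v in x)
    tile_footprint = tm * tn + tn * tk + tm * tk   # on-chip buffer usage
    buffer_capacity = 4096                          # assumed scratchpad size (words)
    overflow_penalty = max(0, tile_footprint - buffer_capacity) * 10.0
    # Larger tiles amortize data movement; tiles that overflow the buffer are penalized.
    data_movement = 1e6 / (tm * tn * tk)
    return data_movement + overflow_penalty

result = gp_minimize(
    estimated_cost,
    space,
    n_calls=50,            # total budget of (expensive) evaluations
    n_initial_points=10,   # random warm-up samples before fitting the GP surrogate
    random_state=0,
)
print("best mapping (log2 tiles):", result.x, "estimated cost:", result.fun)
```

The fitted surrogate is what makes transfer learning plausible: a model trained on one hardware configuration can warm-start the search on a related configuration instead of starting from random samples.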
Workshop Track: MLArchSys
Presentation: In-Person
Presenter Full Name: Grace Dinh
Presenter Email: dinh@berkeley.edu
Presenter Bio: Grace Dinh is a PhD student at UC Berkeley advised by James Demmel. Her research interests include algorithms and programming systems for accelerators, as well as lower bounds for realistic computational models.