Einsum Benchmark: Enabling the Development of Next-Generation Tensor Execution Engines

Published: 26 Sept 2024 · Last Modified: 13 Nov 2024 · NeurIPS 2024 Track: Datasets and Benchmarks (Poster) · License: CC BY 4.0
Keywords: Einsum, Dataset, Benchmark, Tensor Operations, Contraction Paths, Tensor Networks
TL;DR: This paper introduces a comprehensive einsum dataset for benchmarking tensor libraries and contraction-path optimizers, uncovering performance issues and facilitating their improvement.
Abstract: Modern artificial intelligence and machine learning workflows rely on efficient tensor libraries. However, tuning tensor libraries without considering the actual problems they are meant to execute can lead to a mismatch between expected and actual performance. Einsum libraries are tuned to efficiently execute tensor expressions with only a few relatively large, dense, floating-point tensors, but practical applications of einsum cover a much broader range of tensor expressions than those that can currently be executed efficiently. For this reason, we have created a benchmark dataset that encompasses this broad range of tensor expressions, which future implementations of einsum can build upon and be evaluated against. In addition, we provide generators for einsum expressions and converters to einsum expressions in our repository, so that additional data can be generated as needed. The benchmark dataset, the generators, and the converters are released openly and are publicly available at https://benchmark.einsum.org.
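For readers unfamiliar with the setting, the following minimal NumPy sketch illustrates what an einsum expression and a contraction path are; the expression and shapes are illustrative only and are not taken from the benchmark dataset.

```python
import numpy as np

# A chain of three matrix multiplications written as a single einsum
# expression. Shapes are chosen so that the order in which the pairwise
# contractions are performed strongly affects the evaluation cost.
A = np.random.rand(8, 512)
B = np.random.rand(512, 512)
C = np.random.rand(512, 8)

# A contraction-path optimizer decides the pairwise contraction order.
# NumPy's built-in optimizer reports the chosen path and its FLOP count.
path, info = np.einsum_path("ij,jk,kl->il", A, B, C, optimize="optimal")
print(info)

# Evaluate the expression along the precomputed path.
result = np.einsum("ij,jk,kl->il", A, B, C, optimize=path)
```

Benchmark instances like those in the dataset generalize this toy case to expressions with many more operands, varying dimension sizes, sparsity, and non-floating-point data types, which is where the performance issues discussed in the paper arise.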
Supplementary Material: pdf
Submission Number: 1160