Keywords: FPGA, System-on-Chip, Hardware Acceleration, Heterogeneous Computing
TL;DR: TANGRAM is a dataset containing performance statistics of thousands of heterogeneous systems-on-chip, with the goal of supporting research on machine-learning-based performance modeling and optimization of heterogeneous systems.
Abstract: With the end of Moore's Law and Dennard Scaling, high-performance computing (HPC) architectures are evolving to include large Field Programmable Gate Arrays (FPGAs) to improve efficiency. Identifying the optimal configuration for such FPGAs, in terms of the number and type of CPUs, hardware accelerators, and memory channels, is crucial for the creation of efficient computing platforms. However, the complexity of the design space, the difficulty of modeling the interactions between concurrently executed applications, and strict time-to-market requirements have fostered the use of heuristics to perform the exploration, leading to the identification of suboptimal solutions with no quality guarantees. To support the exploration of new systematic methodologies for the design of FPGA-based heterogeneous multi-core architectures, we present TANGRAM, a dataset composed of more than $40,000$ performance and resource-consumption results of different designs, collected from two high-end FPGAs executing heterogeneous and concurrent applications. To assess the suitability of this dataset for machine-learning-based optimization strategies, we tested it with some baseline regression methodologies, showing the possibility of accurately predicting the performance of multiple applications running on the same system.
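To illustrate the kind of baseline regression the abstract alludes to, the sketch below fits an ordinary-least-squares model that predicts execution time from architectural parameters. It is a minimal illustration on synthetic data: the feature schema (CPU count, accelerator count, memory channels) and the performance model are assumptions for demonstration, not the actual TANGRAM columns or any method from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical configuration features (NOT the real TANGRAM schema):
# columns = [num_cpus, num_accelerators, num_memory_channels]
X = rng.integers(1, 9, size=(500, 3)).astype(float)

# Synthetic "execution time": accelerators speed things up,
# extra CPUs add contention, memory channels help; plus noise.
y = 100.0 / X[:, 1] + 5.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(0.0, 1.0, 500)

# Baseline regressor: least squares with a bias column.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# In-sample mean absolute error of the linear baseline.
pred = A @ coef
mae = float(np.mean(np.abs(pred - y)))
print(f"train MAE: {mae:.2f}")
```

On a real design-space dataset one would replace the synthetic arrays with the measured configurations and timings, hold out a test split, and compare this linear baseline against richer regressors.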
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 21543