HLSTransform: Energy-Efficient Llama 2 Inference on FPGAs Via High Level Synthesis

Published: 21 Jun 2024, Last Modified: 26 Jul 2024
Venue: ES-FoMo-II 2024 Poster
License: CC BY 4.0
Keywords: fpga, transformer, energy efficiency, llama-2, high level synthesis
TL;DR: We optimize the Llama-2 forward pass on a Field Programmable Gate Array (FPGA) and observe a considerable reduction in energy usage, measured in milliwatt-hours per token.
Abstract: GPUs have become the leading hardware accelerator for deep learning, with wide use in transformer inference and training; however, their large energy requirements raise environmental and monetary operational costs and limit their use in edge computing. We develop an accelerator for transformers, namely Llama 2, an open-source state-of-the-art large language model (LLM), using high level synthesis (HLS) on Field Programmable Gate Arrays (FPGAs). HLS allows us to rapidly prototype FPGA designs without writing code at the register-transfer level (RTL). We name our method HLSTransform, and the FPGA designs we synthesize with HLS achieve up to a 12.75x and 8.25x reduction in energy used per token on the Xilinx Virtex UltraScale+ VU9P FPGA compared to an Intel Xeon Broadwell E5-2686 v4 CPU and an NVIDIA RTX 3090 GPU, respectively, while increasing inference speed by up to 2.46x over the CPU and maintaining 0.53x the speed of the RTX 3090 GPU, despite the GPU's 4 times higher base clock rate. Given the lack of existing open-source FPGA accelerators for transformers, we open-source our code and document our synthesis steps, which we hope will facilitate further research into the use of FPGAs for transformer inference. The code can be found at https://github.com/HLSTransform/submission.
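To illustrate the HLS workflow described above (this is a minimal sketch, not the authors' actual kernel; the function name, dimension, and pragma choices are illustrative assumptions), an HLS design is ordinary C/C++ annotated with synthesis directives that the tool (e.g., Vitis HLS) uses to generate RTL, while a standard C++ compiler simply ignores the pragmas so the same code can be tested on a CPU:

```cpp
#include <cstddef>

// Hypothetical hidden dimension for illustration only.
constexpr std::size_t DIM = 768;

// Matrix-vector multiply, the core operation in a transformer forward pass.
// The pragma asks the HLS tool to pipeline the inner loop with a target
// initiation interval of 1; the tool reports the interval actually achieved.
void matvec(const float W[DIM][DIM], const float x[DIM], float out[DIM]) {
    for (std::size_t i = 0; i < DIM; ++i) {
        float acc = 0.0f;
        for (std::size_t j = 0; j < DIM; ++j) {
#pragma HLS PIPELINE II=1
            acc += W[i][j] * x[j];
        }
        out[i] = acc;
    }
}
```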
Submission Number: 46