AIHWKIT-Lightning: A Scalable HW-Aware Training Toolkit for Analog In-Memory Computing

Published: 17 Oct 2024, Last Modified: 23 Nov 2024 · MLNCP Poster · CC BY 4.0
Keywords: analog in-memory computing, hardware-aware training
TL;DR: We present a new toolkit for scalable hardware-aware training for analog in-memory computing. Our toolkit is faster and more memory-efficient than the state of the art.
Abstract: We introduce AIHWKIT-Lightning, a new toolkit designed for efficient and scalable hardware-aware training of large neural networks deployed on Analog In-Memory Computing (AIMC)-based hardware. The toolkit prioritizes speed and ease of use, addressing the limitations of existing frameworks in training Large Language Models (LLMs) with billions of parameters. AIHWKIT-Lightning leverages dedicated GPU kernels and a streamlined implementation, achieving up to 3.7x faster training with lower memory consumption than state-of-the-art toolkits. Benefiting from this increased scalability, we demonstrate near-iso-accuracy on the GLUE benchmark using a RoBERTa model trained on 11B tokens. The toolkit is publicly available at https://github.com/IBM/aihwkit-lightning.
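For readers unfamiliar with hardware-aware training for AIMC, the sketch below illustrates the core idea the toolkit builds on: perturbing weights with noise during the forward pass so the network learns to tolerate analog non-idealities. This is a minimal, self-contained PyTorch sketch, not AIHWKIT-Lightning's actual API; the NoisyLinear class, the noise_std parameter, and the per-tensor noise scaling are illustrative assumptions.

# Conceptual sketch of hardware-aware training: inject weight noise in the
# forward pass to emulate AIMC conductance noise. NOT AIHWKIT-Lightning's API;
# class and parameter names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Linear):
    """Linear layer that perturbs weights with Gaussian noise at train time
    (assumed noise model for an analog crossbar array)."""

    def __init__(self, in_features, out_features, noise_std=0.05, bias=True):
        super().__init__(in_features, out_features, bias=bias)
        self.noise_std = noise_std  # relative noise magnitude (assumption)

    def forward(self, x):
        if self.training and self.noise_std > 0:
            # Scale noise by the per-tensor absolute maximum, so perturbations
            # are relative to the weight range mapped onto the crossbar.
            w_max = self.weight.abs().max().detach()
            noise = torch.randn_like(self.weight) * self.noise_std * w_max
            weight = self.weight + noise  # gradients still flow to self.weight
        else:
            weight = self.weight  # noise-free weights at inference time
        return F.linear(x, weight, self.bias)

# Usage: swap nn.Linear for NoisyLinear and train as usual.
model = nn.Sequential(NoisyLinear(128, 64), nn.ReLU(), NoisyLinear(64, 10))
x = torch.randn(8, 128)
loss = model(x).sum()
loss.backward()  # noise is applied in the forward pass only

Per the abstract, the actual toolkit replaces this kind of naive Python-level noise injection with dedicated GPU kernels and a streamlined implementation, which is where the reported speed and memory gains come from.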
Submission Number: 14