Keywords: wavelets, GPT, multiscale, multimodal
TL;DR: We improve transformer-decoder pre-training by imposing a wavelet-inspired multiscale structure on intermediate representations, yielding next-token-prediction gains across text, audio, and music.
Abstract: Large Language Models (LLMs) have ushered in a new wave of artificial intelligence advancements impacting every scientific field and discipline. Most of the data around us, e.g., text, audio, and music, has a multi-scale structure. This paper infuses LLMs with a traditional signal-processing idea, namely wavelets, during pre-training to take advantage of this structure. Without adding any extra parameters to a GPT-style LLM architecture in an academic setup, we achieve the same pre-training performance almost twice as fast on text, audio, and images by imposing a structure on intermediate embeddings. When trained for the same number of training steps, we achieve significant gains, comparable to pre-training a larger neural architecture. We further show that this extends to the Long Range Arena benchmark and to several input representations, including characters, BPE tokens, bytes, raw waveforms, math expressions, and image pixels. Our architecture gives every next-token prediction access to intermediate embeddings at multiple temporal resolutions in every decoder block. We hope this paves the way for incorporating multi-rate signal processing into LLM pre-training rather than relying on scale alone.
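The abstract does not spell out the exact operation, but as a rough illustration, here is a minimal PyTorch sketch of one way a parameter-free, causality-preserving multiscale structure on intermediate embeddings could look. The function name `multiscale_causal_average`, the dyadic (Haar-like) window sizes, and the even split of embedding dimensions across scales are illustrative assumptions, not the paper's confirmed method.

```python
import torch

def multiscale_causal_average(x: torch.Tensor, n_scales: int = 4) -> torch.Tensor:
    """Sketch (assumed, not the paper's exact method): replace slices of the
    embedding dimensions with causal moving averages at dyadic, Haar-like
    windows, so downstream decoder blocks see the same sequence at several
    temporal resolutions without any new parameters.

    x: (batch, seq_len, dim) intermediate embeddings from a decoder block.
    """
    B, T, D = x.shape
    out = x.clone()                       # scale 0 keeps full resolution
    csum = x.cumsum(dim=1)                # prefix sums over time (causal)
    chunk = D // n_scales
    for k in range(1, n_scales):
        w = 2 ** k                        # assumed dyadic window size
        lo = k * chunk
        hi = D if k == n_scales - 1 else (k + 1) * chunk
        s = csum[:, :, lo:hi]
        # Prefix sums shifted right by w steps: value at t is csum[t - w].
        pad = torch.zeros(B, w, hi - lo, dtype=x.dtype, device=x.device)
        shifted = torch.cat([pad, s], dim=1)[:, :T]
        # Position t averages x[max(0, t-w+1) .. t]; count = min(t+1, w),
        # so no position ever reads future tokens.
        counts = torch.arange(1, T + 1, device=x.device).clamp(max=w)
        out[:, :, lo:hi] = (s - shifted) / counts.view(1, T, 1)
    return out

# Shape-preserving and strictly causal, so it can sit between decoder blocks:
x = torch.randn(2, 128, 64)               # (batch, seq_len, dim)
y = multiscale_causal_average(x)
assert y.shape == x.shape
```

Because the operation is shape-preserving, strictly causal, and parameter-free, a transform of this kind can be slotted between the blocks of a standard GPT stack, which matches the constraints stated in the abstract.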
Primary Area: generative models
Submission Number: 24674