HyperSound: Generating Implicit Neural Representations of Audio Signals with Hypernetworks

Published: 21 Oct 2022, Last Modified: 05 May 2023. NeurIPS 2022 Workshop MetaLearn Poster.
Keywords: implicit neural representations, hypernetworks, audio reconstruction
TL;DR: We show a meta-recipe for obtaining implicit neural representations of audio signals using hypernetworks.
Abstract: Implicit neural representations (INRs) are a rapidly growing research field that provides alternative ways to represent multimedia signals. Recent applications of INRs include image super-resolution, compression of high-dimensional signals, and 3D rendering. However, these solutions usually focus on visual data, and adapting them to the audio domain is not trivial. Moreover, they require a separately trained model for every data sample. To address this limitation, we propose HyperSound, a meta-learning method leveraging hypernetworks to produce INRs for audio signals unseen at training time. We show that our approach can reconstruct sound waves with quality comparable to other state-of-the-art models.
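
To illustrate the core idea of a hypernetwork producing an INR for an audio signal, here is a minimal PyTorch sketch, not the authors' implementation: all layer sizes, the convolutional encoder, the two-layer sine-activated target network, and names such as `HyperNet` and `inr_forward` are illustrative assumptions. A hypernetwork encodes a waveform and emits the weights of a small MLP that maps time coordinates to amplitudes, so unseen signals get an INR from a single forward pass.

```python
# Sketch (assumed architecture, not the paper's): hypernetwork -> audio INR.
import torch
import torch.nn as nn

WAVE_LEN = 16384   # assumed input waveform length
HIDDEN = 64        # assumed hidden width of the generated INR

class HyperNet(nn.Module):
    """Encodes a waveform and predicts the parameters of a 2-layer target INR."""
    def __init__(self):
        super().__init__()
        # Parameter count of the target INR: (1 -> HIDDEN) and (HIDDEN -> 1).
        self.n_params = (1 * HIDDEN + HIDDEN) + (HIDDEN * 1 + 1)
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, self.n_params),
        )

    def forward(self, wave):                      # wave: (B, 1, WAVE_LEN)
        return self.encoder(wave)                 # (B, n_params)

def inr_forward(params, t):
    """Evaluate the generated INR at time coordinates t: (B, T, 1) -> (B, T, 1)."""
    B = params.shape[0]
    i = 0
    w1 = params[:, i:i + HIDDEN].view(B, 1, HIDDEN); i += HIDDEN
    b1 = params[:, i:i + HIDDEN].view(B, 1, HIDDEN); i += HIDDEN
    w2 = params[:, i:i + HIDDEN].view(B, HIDDEN, 1); i += HIDDEN
    b2 = params[:, i:i + 1].view(B, 1, 1)
    h = torch.sin(30.0 * (t @ w1 + b1))           # SIREN-style sine activation
    return h @ w2 + b2

# Training-step sketch: reconstruct waveforms with INRs generated in one forward pass.
hyper = HyperNet()
wave = torch.randn(8, 1, WAVE_LEN)                # batch of (dummy) waveforms
t = torch.linspace(-1.0, 1.0, WAVE_LEN).view(1, -1, 1).expand(8, -1, -1)
params = hyper(wave)
recon = inr_forward(params, t)                    # (8, WAVE_LEN, 1)
loss = torch.nn.functional.mse_loss(recon.squeeze(-1), wave.squeeze(1))
loss.backward()
```

In this toy setup the reconstruction loss is a plain MSE on the waveform; the paper's actual training objective and network sizes are not specified here and may differ.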