Optimizations of Neural Audio Coder Toward Perceptual Transparency

Published: 01 Jan 2024, Last Modified: 09 May 2025, IEEE J. Sel. Top. Signal Process. 2024, CC BY-SA 4.0
Abstract: This paper presents comprehensive optimizations of a neural audio coder built on a variational autoencoder (VAE) integrated with an arithmetic coder. Our optimizations target two primary aspects: a novel loss function design and advanced entropy modeling of the bottleneck latent embeddings. The loss function incorporates parameters from a psychoacoustic model (PAM) into the frame-wise distortion measure, pushing reconstruction quality toward perceptual transparency. In addition, a multi-time-scale discriminator minimizes distortions across adjacent frames, reducing artifacts at frame boundaries. The coder is further optimized with three entropy models in the latent domain: the Factorized Entropy Model (FEM), the Hyperprior Model (HPM), and the Joint Hierarchical Model (JHM). Notably, the JHM enhances context modeling across frames to predict components governed by long-term dependencies. To verify the optimization performance, we conducted extensive experiments on a dataset of commercial movie clips and two additional public datasets. Objective metrics consistently showed that our optimized loss function and latent modeling outperformed traditional codecs such as LAME-MP3 and FDK-AAC across all test datasets. Subjective assessments further indicated that our system offers auditory quality comparable or superior to FDK-AAC.
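The PAM-weighted frame-wise distortion is the most concrete ingredient described above, and the sketch below illustrates the general idea in PyTorch. It is not the authors' implementation: the tensor shapes, the function name `pam_weighted_distortion`, and the convention that the PAM supplies per-bin weights (e.g., inverse masking thresholds, so that audible errors dominate the loss while masked errors are de-emphasized) are all assumptions made for illustration.

```python
import torch

def pam_weighted_distortion(x_mag: torch.Tensor,
                            y_mag: torch.Tensor,
                            pam_weight: torch.Tensor) -> torch.Tensor:
    """Frame-wise spectral distortion weighted by a psychoacoustic model.

    x_mag, y_mag : (batch, frames, bins) magnitude spectra of the
                   reference and the decoded signal (assumed layout).
    pam_weight   : (batch, frames, bins) per-bin weights, e.g. derived
                   from inverse masking thresholds (assumed convention),
                   emphasizing errors the PAM deems audible.
    """
    err = (x_mag - y_mag) ** 2        # per-bin squared error
    return (pam_weight * err).mean()  # average over batch, frames, bins

# Toy usage with random tensors standing in for real spectra.
if __name__ == "__main__":
    b, t, f = 2, 120, 513             # batch, frames, frequency bins
    ref = torch.rand(b, t, f)
    dec = ref + 0.01 * torch.randn(b, t, f)
    w = torch.rand(b, t, f)           # placeholder PAM weights
    print(pam_weighted_distortion(ref, dec, w).item())
```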