Universally Quantized Neural Compression

29 Aug 2020
Abstract: A popular approach to learning encoders for lossy compression is to use additive uniform noise during training as a differentiable approximation to test-time quantization. We demonstrate that a uniform noise channel can also be implemented at test time using universal quantization (Ziv, 1985). This allows us to eliminate the mismatch between training and test phases while maintaining a completely differentiable loss function. Implementing the uniform noise channel is a special case of the more general problem of communicating a sample, which we prove is computationally hard if we do not make assumptions about its distribution. However, the uniform special case is efficient as well as easy to implement and thus of great interest from a practical point of view. Finally, we show that quantization can be obtained as a limiting case of a soft quantizer applied to the uniform noise channel, bridging compression with and without quantization.
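As a concrete illustration of the two ideas in the abstract, here is a minimal NumPy sketch of (a) a uniform noise channel realized at test time via universal quantization with a dither shared between encoder and decoder, and (b) a soft quantizer that recovers hard rounding in a limit. The function names and the particular soft-rounding formula are illustrative assumptions, not code or notation taken from the paper itself.

```python
import numpy as np

def universal_quantize(x, rng):
    """Uniform noise channel via universal quantization (Ziv, 1985).

    With a dither u ~ U(-1/2, 1/2) shared between encoder and decoder
    (common randomness), round(x + u) - u has the same marginal
    distribution as x + u' for an independent u' ~ U(-1/2, 1/2), so the
    training-time noise channel is reproduced exactly at test time.
    """
    u = rng.uniform(-0.5, 0.5, size=np.shape(x))  # dither, shared with decoder
    return np.round(x + u) - u

def soft_round(x, alpha):
    """One common soft-quantizer parameterization (an assumption here):
    behaves like the identity as alpha -> 0 and like hard rounding
    as alpha -> infinity.
    """
    m = np.floor(x) + 0.5
    r = x - m  # residual in [-1/2, 1/2)
    return m + 0.5 * np.tanh(alpha * r) / np.tanh(alpha / 2.0)

rng = np.random.default_rng(0)
x = rng.normal(size=5)
print(universal_quantize(x, rng))  # a sample from the uniform noise channel
print(soft_round(x, alpha=1e-3))   # ~ x: nearly the identity
print(soft_round(x, alpha=1e2))    # ~ np.round(x): nearly hard quantization
```

Applying such a soft quantizer before the uniform noise channel and letting alpha grow is one way to interpolate between dithered (quantization-free) compression and ordinary hard quantization, which is the bridge the abstract refers to.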