Lightweight Width-Depth Scalable Implicit Neural Representation for Progressive Image Compression

Qingyu Mao, Wenming Wang, Yongsheng Liang, Chenhu Xiao, Fanyang Meng, Gwanggil Jeon

Published: 01 Jan 2025, Last Modified: 10 Nov 2025 · IEEE Transactions on Consumer Electronics · CC BY-SA 4.0
Abstract: Image compression approaches using implicit neural representation (INR) have recently gained attention for their lightweight nature, compactness, and fast decoding, showing promise for edge computing in consumer devices. Specifically, INR-based image compression methods store each image implicitly within a lightweight neural network, which serves as a compact representation of the image. However, most existing methods are limited to representing single-quality images with fixed-size models, which necessitates training a separate model for each quality level and incurs additional training and storage costs. To tackle this problem, we propose a progressive image compression method based on a Width-Depth Scalable Implicit Neural Representation (WDS-INR), which is composed of executable sub-networks of varying scales. By adjusting the scale of the sub-networks, WDS-INR can represent images at different quality levels while supporting progressive transmission. This scalable architecture makes WDS-INR well suited for deployment on mobile and IoT devices. Furthermore, we propose a band-limited initialization scheme that enhances both the representation capability and training stability of WDS-INR. Finally, we introduce a meta-learning approach for the base sub-network that accelerates encoding ($4\times$ faster). Experimental results demonstrate that the proposed method outperforms the baseline in rate-distortion performance (+0.28 dB PSNR) while enabling scalable bit-rates with progressive decoding.
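To make the width-depth scalable idea concrete, the sketch below shows one plausible way such an INR could be structured: a SIREN-style MLP whose sub-networks are obtained by slicing each weight matrix to a smaller width and truncating the hidden-layer stack to a smaller depth. This is a minimal illustration under assumed details; the class name `ScalableINR`, the sine activation, and parameters such as `max_width`, `max_depth`, and `w0` are placeholders, not the paper's exact architecture.

```python
# Hypothetical sketch of a width-depth scalable INR (not the paper's code).
# A sub-network of width w and depth d reuses slices of the full-size
# parameter tensors, so smaller models are strict prefixes of larger ones.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScalableINR(nn.Module):
    def __init__(self, in_dim=2, out_dim=3, max_width=64, max_depth=6, w0=30.0):
        super().__init__()
        self.max_width, self.max_depth, self.w0 = max_width, max_depth, w0
        # Full-size parameters; every sub-network shares slices of these.
        self.first = nn.Linear(in_dim, max_width)
        self.hidden = nn.ModuleList(
            nn.Linear(max_width, max_width) for _ in range(max_depth - 1)
        )
        self.last = nn.Linear(max_width, out_dim)

    def forward(self, coords, width=None, depth=None):
        w = width or self.max_width
        d = depth or self.max_depth
        # First layer: keep only the first `w` output units (width scaling).
        x = torch.sin(self.w0 * F.linear(coords, self.first.weight[:w],
                                         self.first.bias[:w]))
        # Hidden layers: slice both input and output dims to `w`, and
        # evaluate only the first `d - 1` layers (depth scaling).
        for layer in self.hidden[: d - 1]:
            x = torch.sin(self.w0 * F.linear(x, layer.weight[:w, :w],
                                             layer.bias[:w]))
        # Shared output head, sliced to the active width.
        return F.linear(x, self.last.weight[:, :w], self.last.bias)


coords = torch.rand(1024, 2) * 2 - 1          # normalized pixel coordinates
model = ScalableINR()
rgb_base = model(coords, width=16, depth=3)   # small sub-network: coarse image
rgb_full = model(coords)                      # full network: highest quality
```

Under this nesting, progressive transmission follows naturally: sending the base sub-network's parameters first already yields a decodable low-quality image, and each further slice of parameters refines it toward the full-quality reconstruction.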