Abstract: Image compression approaches using implicit neural representations (INRs) have recently gained attention for their lightweight nature, compactness, and fast decoding, showing promise for edge computing in consumer devices. Specifically, INR-based image compression methods implicitly store each image within a lightweight neural network, which serves as a compact representation of the image. However, most existing methods are limited to representing single-quality images with fixed-size models, which necessitates training separate models independently for images at varying quality levels, leading to additional training and storage costs. To tackle this problem, we propose a progressive image compression method based on a Width-Depth Scalable Implicit Neural Representation (WDS-INR), which is composed of executable sub-networks of varying scales. By adjusting the scale of the sub-networks, WDS-INR can represent images at different quality levels while supporting progressive transmission. The scalable architecture of WDS-INR makes it well suited for deployment on mobile and IoT devices. Furthermore, we propose a band-limited initialization scheme that enhances both the representation capability and training stability of WDS-INR. Finally, we introduce a meta-learning approach for the base sub-network to accelerate encoding ($4\times$ faster). Experimental results demonstrate that the proposed method outperforms the baseline in rate-distortion performance (+0.28 dB PSNR), while enabling scalable bit-rates with progressive decoding.
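The core idea of executing width-depth sub-networks of a single coordinate MLP can be illustrated as follows. This is a minimal NumPy sketch under stated assumptions, not the paper's implementation: the function names, ReLU activations, and layer sizes are illustrative choices, and the paper's band-limited initialization and meta-learned base sub-network are not reproduced here. A sub-network is obtained simply by slicing the first fraction of each hidden layer's units (width scaling) and running only the first few hidden layers (depth scaling), so one stored weight set serves several quality levels.

```python
import numpy as np

def init_mlp(layer_widths, seed=0):
    """Random weights for a full-width, full-depth coordinate MLP.
    layer_widths = [in_dim, hidden_1, ..., hidden_k, out_dim]."""
    rng = np.random.default_rng(seed)
    params = []
    for fan_in, fan_out in zip(layer_widths[:-1], layer_widths[1:]):
        W = rng.normal(0.0, np.sqrt(2.0 / fan_in), (fan_in, fan_out))
        b = np.zeros(fan_out)
        params.append((W, b))
    return params

def sub_network_forward(params, x, width_frac=1.0, depth=None):
    """Evaluate a width-depth sub-network: use only the first
    `width_frac` fraction of each hidden layer's units and only the
    first `depth` hidden layers, then the shared output layer."""
    n_hidden = len(params) - 1
    depth = n_hidden if depth is None else depth
    h = x
    for W, b in params[:depth]:
        k_in = h.shape[1]                       # width of previous layer
        k_out = max(1, int(W.shape[1] * width_frac))
        h = np.maximum(h @ W[:k_in, :k_out] + b[:k_out], 0.0)  # ReLU
    W_out, b_out = params[-1]
    return h @ W_out[:h.shape[1], :] + b_out

# Full network vs. a half-width, depth-1 sub-network on the same coordinates
params = init_mlp([2, 16, 16, 3])               # 2-D coords -> RGB
coords = np.random.default_rng(1).uniform(-1.0, 1.0, (5, 2))
full = sub_network_forward(params, coords)
small = sub_network_forward(params, coords, width_frac=0.5, depth=1)
```

Because every sub-network shares a prefix of the full weight matrices, the weights can be transmitted progressively: sending the first slices already yields a decodable low-quality image, and later slices refine it.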
External IDs: doi:10.1109/tce.2025.3565495