Abstract: Learned image compression (LIC) has attracted considerable attention due to its outstanding rate-distortion performance, which comes at the cost of high computational complexity. However, most existing LIC methods with symmetric, fixed-complexity encoder-decoder frameworks fail to accommodate the varying encoding/decoding complexity constraints of different application scenarios, which poses significant deployment challenges. To address this, we propose the Complexity-Scalable Learned Image Compression (CSLIC) method. It leverages neural architecture search (NAS) and knowledge distillation to construct multiple encoder and decoder configurations with varying complexity levels, achieving complexity scalability and superior complexity-compression trade-offs. Specifically, we first construct a search space encompassing building-block types and channel dimensions and introduce a module-wise NAS algorithm into the LIC task. Through parameter-shared supernet training and sampling, we identify the optimal synthesis-transform and analysis-transform architectures under different complexity constraints. Finally, combined with a knowledge-distillation-guided progressive training strategy, we implement a complexity-scalable learned image compression framework. Experimental results demonstrate that our proposed method achieves superior compression performance-complexity trade-offs, delivering BD-rate savings of -13.59% to -22.88% with computational complexities ranging from 97.65 to 283.54 KMACs/pixel. At the same complexity level, our method significantly outperforms existing SOTA methods while offering flexible adjustment of encoding and decoding complexity.
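The parameter-shared supernet idea mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a slimmable-network-style layer in which every sampled sub-network reads the leading slice of one shared weight tensor, so that configurations of different channel width (and hence different MAC counts) are trained jointly. All names (`SharedLinear`, `macs`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedLinear:
    """Hypothetical parameter-shared supernet layer (illustrative only)."""

    def __init__(self, max_in, max_out):
        # One full-width parameter tensor shared by every sub-network.
        self.W = rng.standard_normal((max_out, max_in)) * 0.1
        self.b = np.zeros(max_out)

    def forward(self, x, out_ch):
        # A sampled architecture picks out_ch <= max_out channels and
        # reads the leading slice of the shared weights and biases.
        return x @ self.W[:out_ch].T + self.b[:out_ch]

def macs(in_ch, out_ch):
    # Multiply-accumulate count of the sampled layer; summing this over
    # all layers gives a complexity measure analogous to KMACs/pixel.
    return in_ch * out_ch

layer = SharedLinear(max_in=8, max_out=16)
x = rng.standard_normal((4, 8))

# Sample two complexity levels from the same shared parameters:
y_small = layer.forward(x, out_ch=4)   # low-complexity configuration
y_large = layer.forward(x, out_ch=16)  # high-complexity configuration

# The small output equals the leading slice of the large one, so
# training the supernet updates all configurations' shared weights.
assert np.allclose(y_large[:, :4], y_small)
print(y_small.shape, y_large.shape, macs(8, 4), macs(8, 16))
```

Under this sharing scheme, searching under a complexity constraint reduces to choosing, per module, a block type and a channel width whose accumulated MAC count stays within budget.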
External IDs: doi:10.1109/tcsvt.2026.3670313