Consistency-Contrast Learning for Conceptual Coding

ACM Multimedia 2022 (modified: 18 Nov 2022)
Abstract: As an emerging compression scheme, conceptual coding typically encodes images into structural and textural representations and decodes them in a deep synthesis fashion. However, existing conceptual coding schemes ignore the structure of the deep texture representation space, making it challenging to establish efficient and faithful conceptual representations. In this paper, we first introduce contrastive learning into conceptual coding and propose Consistency-Contrast Learning (CCL), which optimizes the representation space through a consistency-contrast regularization. By treating original images and their reconstructions as "positive" pairs and random images in a batch as "negative" samples, CCL aligns the texture representation space with the source image space in a relative manner. Extensive experiments on diverse datasets demonstrate that: (1) the proposed CCL achieves the best compression performance on the conceptual coding task; (2) CCL is superior to other popular regularization methods at improving reconstruction quality; (3) CCL is general and can be applied to other tasks involving representation optimization and image reconstruction, such as GAN inversion.
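The abstract describes the consistency-contrast regularization only at a high level. The sketch below shows one plausible InfoNCE-style realization of the idea, assuming each original image and its reconstruction are embedded into a shared texture space and the other images in the batch act as negatives. The function name `consistency_contrast_loss`, the `temperature` parameter, and the cross-entropy formulation are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def consistency_contrast_loss(z_src: torch.Tensor,
                              z_rec: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """Sketch of a consistency-contrast regularization (hypothetical form).

    z_src: (B, D) embeddings of the original images.
    z_rec: (B, D) embeddings of the reconstructed images.
    Each (z_src[i], z_rec[i]) pair is treated as "positive"; the other
    images in the batch serve as "negative" samples.
    """
    z_src = F.normalize(z_src, dim=1)
    z_rec = F.normalize(z_rec, dim=1)
    # (B, B) cosine-similarity matrix; the diagonal holds positive pairs.
    logits = z_rec @ z_src.t() / temperature
    targets = torch.arange(z_src.size(0), device=z_src.device)
    # Cross-entropy pulls each reconstruction toward its own source image
    # and pushes it away from the other images in the batch.
    return F.cross_entropy(logits, targets)
```

In practice such a term would be added, with some weighting, to the codec's reconstruction and adversarial losses, so that the texture space stays aligned with the source image space while compression is optimized.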