Abstract: We propose an end-to-end learned scalable multilayer feature compression method. Our proposed method is illustrated in Figure 1, where $f_1, \ldots, f_n$ denote deep features at different layers. The deep feature $f_n$, which serves as the base layer, is first transformed and quantized into the latent $\hat{y}_n$. The latent $\hat{y}_n$ is then inversely transformed to reconstruct the feature as $\hat{f}_n$. In addition, $\hat{y}_n$ is fed into the entropy model of the previous-layer feature $f_{n-1}$ as conditional information for the enhancement layer. That entropy model takes both $\hat{y}_{n-1}$ and $\hat{y}_n$ as inputs to improve compression efficiency.
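To make the layered structure concrete, the following is a minimal sketch of the two-layer case, assuming a PyTorch-style implementation. All class and variable names (`BaseLayerCodec`, `ConditionalEntropyModel`, the channel sizes, and the simplified enhancement transform) are hypothetical illustrations rather than the authors' code; quantization is approximated by rounding with a straight-through estimator, and the conditional entropy model only predicts Gaussian parameters for rate estimation instead of performing actual arithmetic coding.

```python
import torch
import torch.nn as nn


def quantize_ste(y: torch.Tensor) -> torch.Tensor:
    """Rounding with a straight-through gradient (common in learned compression)."""
    return y + (torch.round(y) - y).detach()


class BaseLayerCodec(nn.Module):
    """Transforms the base-layer feature f_n into latent y_n and reconstructs f_n from it."""

    def __init__(self, feat_ch: int, latent_ch: int):
        super().__init__()
        self.analysis = nn.Sequential(
            nn.Conv2d(feat_ch, latent_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(latent_ch, latent_ch, 3, stride=2, padding=1),
        )
        self.synthesis = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, latent_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(latent_ch, feat_ch, 4, stride=2, padding=1),
        )

    def forward(self, f_n: torch.Tensor):
        y_n_hat = quantize_ste(self.analysis(f_n))   # latent \hat{y}_n
        f_n_hat = self.synthesis(y_n_hat)            # reconstruction \hat{f}_n
        return y_n_hat, f_n_hat


class ConditionalEntropyModel(nn.Module):
    """Predicts (mu, sigma) for \hat{y}_{n-1}, conditioned on the base-layer latent \hat{y}_n."""

    def __init__(self, latent_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * latent_ch, latent_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(latent_ch, 2 * latent_ch, 3, padding=1),
        )

    def forward(self, y_prev_hat: torch.Tensor, y_base_hat: torch.Tensor):
        # Bring the base latent to the enhancement latent's spatial size before concatenation.
        if y_base_hat.shape[-2:] != y_prev_hat.shape[-2:]:
            y_base_hat = nn.functional.interpolate(
                y_base_hat, size=y_prev_hat.shape[-2:], mode="nearest")
        mu, log_sigma = self.net(torch.cat([y_prev_hat, y_base_hat], dim=1)).chunk(2, dim=1)
        return mu, log_sigma.exp()


if __name__ == "__main__":
    f_n = torch.randn(1, 256, 16, 16)     # base-layer feature f_n
    f_prev = torch.randn(1, 256, 32, 32)  # previous-layer (enhancement) feature f_{n-1}

    base = BaseLayerCodec(feat_ch=256, latent_ch=128)
    enh_analysis = nn.Conv2d(256, 128, 3, stride=2, padding=1)  # simplified enhancement transform
    entropy = ConditionalEntropyModel(latent_ch=128)

    y_n_hat, f_n_hat = base(f_n)                     # base layer: \hat{y}_n and \hat{f}_n
    y_prev_hat = quantize_ste(enh_analysis(f_prev))  # enhancement latent \hat{y}_{n-1}

    # Estimated bits for \hat{y}_{n-1} under the conditional Gaussian model.
    mu, sigma = entropy(y_prev_hat, y_n_hat)
    gauss = torch.distributions.Normal(mu, sigma)
    p = gauss.cdf(y_prev_hat + 0.5) - gauss.cdf(y_prev_hat - 0.5)
    bits = -torch.log2(p.clamp_min(1e-9)).sum()
    print(f_n_hat.shape, bits.item())
```

The design choice the sketch tries to reflect is that the base-layer latent $\hat{y}_n$ is reused twice: once to reconstruct $\hat{f}_n$, and once as side information that sharpens the probability model of $\hat{y}_{n-1}$, which is what lowers the enhancement layer's bitrate.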