Abstract: This paper describes a method for pre-transforming images before image compression. The method aims to preserve the recognition accuracy of deep neural networks, even for highly compressed images. In general, images compressed at high compression ratios degrade recognition accuracy. Our method prevents this through image pre-transformation, reducing the bitrate while maintaining recognition accuracy. A deep encoder-decoder network is used as the pre-transformation model. The model is trained with a new loss function that combines a recognition loss with a loss that increases spatial correlation. We evaluated the method on two benchmark datasets: ImageNet 2012 and CUB-200-2011. Compared with the original images, the images transformed by our method reduced the bitrate by 21.5% on ImageNet 2012 while maintaining equivalent recognition accuracy when encoded with the H.265/HEVC video coding standard.
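The combined objective can be illustrated with a minimal sketch. The paper's exact loss terms are not specified in the abstract, so the spatial-correlation term below is an assumption: a total-variation-style penalty on neighboring-pixel differences, which encourages smoother transformed images that block-based codecs such as H.265/HEVC can encode at lower bitrates. The function names and the trade-off weight `lam` are hypothetical.

```python
import numpy as np

def spatial_correlation_loss(x):
    # x: (H, W) image array. Penalize differences between neighboring
    # pixels (a total-variation-style term). Smaller values mean higher
    # spatial correlation, which lowers the bits a block codec needs.
    dh = np.abs(np.diff(x, axis=0)).mean()  # vertical neighbor differences
    dw = np.abs(np.diff(x, axis=1)).mean()  # horizontal neighbor differences
    return dh + dw

def combined_loss(recognition_loss, transformed, lam=0.1):
    # Hypothetical total objective: the recognition loss keeps the
    # downstream classifier accurate on transformed images, while the
    # weighted correlation term makes them cheaper to encode.
    return recognition_loss + lam * spatial_correlation_loss(transformed)
```

For a perfectly flat image the correlation term is zero, so only the recognition loss remains; any spatial variation adds a positive penalty scaled by `lam`.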