Enhancing Visual Understanding by Removing Dithering with Global and Self-Conditioned Transformation
Abstract: PNG-8 images are widely used on the web because of their small file size, but their limited color palette often introduces dithering artifacts. Restoring these images with a conventional convolutional neural network (CNN) is often suboptimal because the spatial distribution of dithering is not uniform across the image, whereas the convolution operator is spatially invariant: it applies the same kernel to every pixel, which we refer to as a global transformation. To address this issue, we propose PNG8IRNet, an approach that combines global and self-conditioned transformations to remove dithering artifacts. Our method incorporates a multilayer perceptron (MLP) that generates a distinct kernel for each pixel, accounting for the spatial non-uniformity of dithering; we define this as a self-conditioned transformation. A comprehensive set of experiments on multiple datasets demonstrates that PNG8IRNet substantially enhances visual understanding.
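The abstract describes a global (shared-kernel) convolution combined with a self-conditioned branch in which an MLP predicts a distinct kernel per pixel. The sketch below is a minimal PyTorch illustration of that idea under our own assumptions; the module name `GlobalSelfConditionedBlock` and all implementation details are hypothetical and not taken from the authors' code.

```python
# Minimal sketch: a shared-kernel ("global") convolution fused with a
# per-pixel ("self-conditioned") transformation whose kernels come from an MLP.
# All names and design choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalSelfConditionedBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.k = kernel_size
        # Global transformation: one kernel shared by every pixel.
        self.global_conv = nn.Conv2d(channels, channels, kernel_size,
                                     padding=kernel_size // 2)
        # Self-conditioned transformation: a per-pixel MLP (1x1 convolutions)
        # predicts a distinct k*k kernel at every spatial location.
        self.kernel_mlp = nn.Sequential(
            nn.Conv2d(channels, channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, kernel_size * kernel_size, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Predict per-pixel kernels and normalize them with a softmax.
        kernels = self.kernel_mlp(x)                        # (B, k*k, H, W)
        kernels = F.softmax(kernels, dim=1).unsqueeze(1)    # (B, 1, k*k, H, W)
        # Gather the k*k neighbors of each pixel and weight them with the
        # spatially varying kernel (shared across channels for brevity).
        patches = F.unfold(x, self.k, padding=self.k // 2)  # (B, C*k*k, H*W)
        patches = patches.view(b, c, self.k * self.k, h, w)
        local = (patches * kernels).sum(dim=2)              # (B, C, H, W)
        # Fuse the global and self-conditioned branches.
        return self.global_conv(x) + local


if __name__ == "__main__":
    block = GlobalSelfConditionedBlock(channels=16)
    out = block(torch.randn(1, 16, 32, 32))
    print(out.shape)  # torch.Size([1, 16, 32, 32])
```

The key contrast is visible in the forward pass: `global_conv` applies the same weights everywhere, while `local` is computed with weights that change from pixel to pixel, which is what lets the block adapt to spatially non-uniform dithering.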