Abstract: A low-dynamic-range (LDR) image captured from a high-dynamic-range (HDR) scene contains saturated shadow and highlight regions. Restoring these saturated regions is an ill-posed problem. In this article, the saturated regions of the LDR image are restored by fusing model-based and data-driven approaches. With such a neural augmentation, two synthetic LDR images are first generated from the input LDR image via a new model-based approach, which relaxes the requirement of mapping integers to integers and improves the modeling accuracy. One synthetic image is brighter than the input to restore the shadow regions, and the other is darker to restore the highlight regions. Both synthetic images are then refined by a single exposedness-aware saturation restoration network (EASRN). Finally, the two synthetic images and the input image are combined via an HDR synthesis algorithm or a multiscale exposure fusion (MEF) algorithm. Experimental results indicate that the proposed algorithm outperforms existing algorithms in terms of HDR-VDP-3. The proposed algorithm can be embedded in smartphones or digital cameras to produce an information-enriched LDR image.
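To make the three-stage pipeline described above concrete, the following Python sketch outlines the flow under stated assumptions. The functions `synthesize_exposures`, `easrn`, and `fuse` are hypothetical placeholders; in particular, the gamma-curve synthesis below is only an illustrative stand-in for the paper's model-based exposure synthesis, not the authors' actual method.

```python
import numpy as np

def synthesize_exposures(ldr, gamma=2.2):
    """Illustrative placeholder: a simple gamma adjustment standing in for
    the paper's model-based synthesis of a brighter and a darker image."""
    bright = np.clip(ldr ** (1.0 / gamma), 0.0, 1.0)  # brighter image for shadow regions
    dark = np.clip(ldr ** gamma, 0.0, 1.0)            # darker image for highlight regions
    return bright, dark

def restore(ldr, easrn, fuse):
    """Sketch of the overall flow: synthesize two exposures, refine both with
    one shared network (EASRN in the paper), then fuse with the input."""
    bright, dark = synthesize_exposures(ldr)
    bright, dark = easrn(bright), easrn(dark)   # single exposedness-aware network refines both
    return fuse([dark, ldr, bright])            # HDR synthesis or multiscale exposure fusion (MEF)
```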