Abstract: High Dynamic Range (HDR) images can be reconstructed from multiple Low Dynamic Range (LDR) images using existing deep neural network (DNN) techniques. Despite notable advances, DNN-based methods still exhibit ghosting artifacts when handling LDR images with saturation and significant motion. Recently, diffusion models (DMs) have been introduced into HDR imaging and show promising performance, especially in producing perceptually pleasing results. However, DMs typically require many inference iterations to recover a clean image from Gaussian noise, demanding substantial computational resources. Moreover, a DM only learns the probability distribution of the noise added at each step and neglects image-space constraints on the HDR output, which limits its performance on distortion-based metrics. To tackle these challenges, we propose an efficient network that integrates DM modules into an existing regression-based model, providing reliable content reconstruction for HDR imaging while avoiding the degradation in distortion-based metrics.
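To make the computational concern concrete, below is a minimal sketch of a generic DDPM-style reverse sampling loop, not the authors' method: the placeholder `eps_model`, the step count `T`, and the linear beta schedule are all illustrative assumptions. The loop invokes the denoising network once per timestep, which is why naive DM inference is expensive.

```python
# Minimal, generic DDPM-style reverse sampling sketch (illustrative only;
# eps_model, T, and the beta schedule are assumptions, not the paper's method).
import torch

T = 1000                                         # assumed number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)            # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def eps_model(x_t, t):
    """Placeholder noise predictor; a real DM would use a trained U-Net."""
    return torch.zeros_like(x_t)

@torch.no_grad()
def ddpm_sample(shape=(1, 3, 64, 64)):
    x_t = torch.randn(shape)                     # start from pure Gaussian noise
    for t in reversed(range(T)):                 # one network call per step -> costly
        eps = eps_model(x_t, t)
        coef = (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x_t - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + torch.sqrt(betas[t]) * noise
    return x_t                                   # approximate clean sample

sample = ddpm_sample()
```

Note that the per-step objective above supervises only the predicted noise, with no direct constraint in the image space of the final HDR result, which is the second limitation the abstract highlights.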