Multi-Exposure Image Fusion Using Cross-Attention Mechanism

Published: 01 Jan 2022 · Last Modified: 13 Nov 2024 · ICCE 2022 · CC BY-SA 4.0
Abstract: Multi-exposure fusion (MEF) is a popular method for obtaining a high dynamic range (HDR) image from multiple low dynamic range (LDR) images. Although recent works have employed convolutional neural networks (CNNs) to solve the MEF problem, various challenges remain, such as color distortion and detail loss, due to the limited receptive field. In this paper, we present a cross-attention module for multi-exposure image fusion. Unlike existing CNN-based methods, which capture only the context of a local region in the target image, our method adaptively aggregates local features with global dependencies across all positions. Furthermore, we propose a detail compensation module for feature fusion that restores the color and detail lost in saturated regions. Our network extracts features with an encoder, fuses them with the cross-attention module and the detail compensation module, and reconstructs the fused image with a decoder. Experimental results show that, compared with state-of-the-art methods, the proposed method achieves better performance in both subjective and objective evaluations, particularly in terms of color expression and detail preservation.
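To illustrate the cross-attention idea described in the abstract (aggregating local features with global dependencies at all positions across exposures), below is a minimal PyTorch sketch. It is not the authors' implementation; the module name, layer choices, and the residual connection are assumptions made for illustration only.

```python
# Minimal sketch of cross-attention between features of two exposures.
# All names and design details here are illustrative assumptions, not the
# paper's actual architecture.
import torch
import torch.nn as nn


class CrossAttention(nn.Module):
    """Query one exposure's features against the other's keys/values so that
    every spatial position can aggregate information from all positions."""

    def __init__(self, channels: int):
        super().__init__()
        self.to_q = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_k = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)
        self.scale = channels ** -0.5

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat_a.shape
        q = self.to_q(feat_a).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.to_k(feat_b).flatten(2)                    # (B, C, HW)
        v = self.to_v(feat_b).flatten(2).transpose(1, 2)    # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)    # (B, HW, HW) global attention map
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return feat_a + out  # residual connection keeps the local context


# Usage sketch: fuse encoder features of an under- and over-exposed image.
if __name__ == "__main__":
    under, over = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
    fused = CrossAttention(64)(under, over)
    print(fused.shape)  # torch.Size([1, 64, 32, 32])
```

In such a sketch, the attention map is computed over all spatial positions, which is what gives the fusion a global receptive field; a full pipeline would place this module between an encoder and a decoder, alongside the detail compensation step the abstract mentions.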