NCDCN: multi-focus image fusion via nest connection and dilated convolution network

Published: 01 Jan 2022 · Last Modified: 12 Nov 2025 · Appl. Intell. 2022 · CC BY-SA 4.0
Abstract: In this paper, a focus probability learning network called NCDCN is proposed for the multi-focus image fusion (MFIF) task. First, a dense network and a nest connection architecture are combined to construct an encoder called MSDN, which extracts multi-scale focus features of the same size at each level. Then, a dilated convolution-based inception network (DCIN) is designed as the decoder, which has a stronger feature aggregation ability at a small computational cost. In addition, a hybrid loss is introduced to train the network effectively: the fidelity loss with the ℓ2 norm makes the focus probability approximate its ground truth; the structural similarity loss improves the similarity along the edges between focused and defocused regions; and the intersection over union loss reduces the sensitivity of the fidelity loss to the size of the focus region. Experimental results and analysis show the effectiveness of NCDCN and its superiority over other state-of-the-art methods.
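The abstract names three loss terms but no exact formulation or weighting. Below is a minimal sketch of such a hybrid loss in PyTorch, assuming a soft (differentiable) IoU term, an average-pooling SSIM window, and equal weights `lambda_ssim` and `lambda_iou`; these specifics are illustrative assumptions, not the paper's exact settings.

```python
# Hypothetical sketch of the hybrid loss described in the abstract.
# p  : predicted focus probability map, shape (B, 1, H, W), values in [0, 1]
# gt : ground-truth focus map, same shape and range
import torch
import torch.nn.functional as F


def ssim_loss(p, gt, window_size=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - SSIM between p and gt, computed with a simple average-pooling window."""
    pad = window_size // 2
    mu_p = F.avg_pool2d(p, window_size, stride=1, padding=pad)
    mu_g = F.avg_pool2d(gt, window_size, stride=1, padding=pad)
    var_p = F.avg_pool2d(p * p, window_size, stride=1, padding=pad) - mu_p ** 2
    var_g = F.avg_pool2d(gt * gt, window_size, stride=1, padding=pad) - mu_g ** 2
    cov = F.avg_pool2d(p * gt, window_size, stride=1, padding=pad) - mu_p * mu_g
    ssim_map = ((2 * mu_p * mu_g + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_g ** 2 + c1) * (var_p + var_g + c2)
    )
    return 1.0 - ssim_map.mean()


def iou_loss(p, gt, eps=1e-6):
    """Soft intersection-over-union loss on the focus probability map."""
    inter = (p * gt).sum(dim=(1, 2, 3))
    union = p.sum(dim=(1, 2, 3)) + gt.sum(dim=(1, 2, 3)) - inter
    return (1.0 - (inter + eps) / (union + eps)).mean()


def hybrid_loss(p, gt, lambda_ssim=1.0, lambda_iou=1.0):
    """Fidelity (l2) + structural similarity + IoU terms, weights assumed."""
    fidelity = F.mse_loss(p, gt)  # l2-norm fidelity term
    return fidelity + lambda_ssim * ssim_loss(p, gt) + lambda_iou * iou_loss(p, gt)
```

In this sketch, the IoU term normalizes by the union of the predicted and ground-truth focus regions, which is one common way to make a loss less dependent on how large the focused area is.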