Multi-focus image fusion via online convolutional sparse coding

Published: 01 Jan 2024, Last Modified: 11 Apr 2025 · Multim. Tools Appl. 2024 · CC BY-SA 4.0
Abstract: Efficiently and cleanly eliminating out-of-focus pixels remains a persistent challenge in multi-focus image fusion. Previous approaches tend to pursue high-quality fusion results while ignoring running cost. Online Convolutional Sparse Coding (OCSC) is an online variant of Convolutional Sparse Coding (CSC) that avoids the heavy time and space costs of batch-mode CSC. In this paper, we use a parallel version of OCSC to alleviate the time-consuming defects of previous methods. Multi-focus gray and color images are tested to verify the superiority of the proposed method, which obtains excellent visual results and strong objective evaluations. The operating cost is reduced by roughly 95% compared with a fusion method based on online dictionary learning. A comprehensive analysis of subjective quality, objective metrics, and running time shows that our method combines fast fusion with high reconstruction quality.
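The paper's parallel OCSC pipeline is not reproduced here, but the general idea behind sparse-coding-based multi-focus fusion — build a per-pixel focus-activity map from high-frequency responses, then keep the more in-focus source at each pixel — can be sketched minimally. In this illustration the coefficient-activity map is approximated with a locally averaged Laplacian response; the function names and the Laplacian focus measure are our assumptions, not the authors' code:

```python
import numpy as np

def laplacian_energy(img, radius=3):
    """Per-pixel focus activity: |Laplacian| averaged over a local window.

    This plays the role of the sparse-coefficient activity map in
    CSC-based fusion; a plain high-pass filter is used as a stand-in.
    """
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    act = np.abs(lap)
    # Box-average the activity so the decision map is spatially smooth.
    out = np.zeros_like(act)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(act, dy, 0), dx, 1)
    return out / (2 * radius + 1) ** 2

def fuse(img_a, img_b):
    """Choose-max rule: keep, at each pixel, the source with higher activity."""
    mask = laplacian_energy(img_a) >= laplacian_energy(img_b)
    return np.where(mask, img_a, img_b)

def box_blur(img, radius=2):
    """Simple box blur, used only to simulate out-of-focus regions."""
    out = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(img, dy, 0), dx, 1)
    return out / (2 * radius + 1) ** 2

if __name__ == "__main__":
    # Synthetic test: two partially blurred copies of a ground-truth image.
    rng = np.random.default_rng(0)
    g = rng.random((64, 64))
    blurred = box_blur(g)
    a = g.copy(); a[:, 32:] = blurred[:, 32:]   # right half out of focus
    b = g.copy(); b[:, :32] = blurred[:, :32]   # left half out of focus
    fused = fuse(a, b)
    mse = lambda x: float(np.mean((x - g) ** 2))
    print(mse(fused), mse(a), mse(b))
```

On this synthetic pair, the fused image is closer to the ground truth than either partially blurred input, which is the behavior the choose-max rule is designed to deliver.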