CycleFusion: Automatic Annotation and Image-to-Image Translation-Based Cycle-Consistent Adversarial Network for Infrared and Visible Image Fusion
Abstract: Infrared and visible image fusion aims to extract the prominent targets and intricate textures of the source images and combine them into a fused image with heightened visual impact. While deep learning-based fusion methods offer end-to-end fusion, their design is complicated by the absence of ground truth. To address this challenge, we first construct an annotated dataset specific to this field by building upon existing datasets. We then cast the image fusion process as an image-to-image translation problem and design a novel fusion model, termed CycleFusion, which we train and test on the annotated dataset. In both qualitative and quantitative evaluations, the curated dataset yields favorable visual enhancement and texture delineation, and comparative analysis shows that CycleFusion outperforms 24 other state-of-the-art fusion models. Specifically, CycleFusion ranks first on gradient-based fusion performance (Qab/f), cross entropy (CE), and the Chen-Varshney metric (QCV), and second on the edge detection evaluation (ED). These results indicate that the proposed method produces fused images that are rich in information, clear, high in contrast, and easy to interpret visually. Moreover, it is competitive with alternative methods in model parameters and runtime efficiency.
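The abstract does not specify the network architecture, so the following minimal PyTorch sketch only illustrates the core idea of treating fusion as an image-to-image translation with a cycle-consistency constraint: a generator maps the infrared/visible pair to a fused image, and an inverse mapping reconstructs both sources from it. All module names, layer sizes, and loss terms here are illustrative assumptions, not the paper's actual CycleFusion design (the adversarial terms are omitted for brevity).

```python
# Hypothetical sketch of cycle-consistent fusion training; not the paper's
# actual architecture. Module names and hyperparameters are assumptions.
import torch
import torch.nn as nn

class FusionGenerator(nn.Module):
    """Toy generator: maps a 2-channel (IR + visible) stack to a fused image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, ir, vis):
        return self.net(torch.cat([ir, vis], dim=1))

class Decomposer(nn.Module):
    """Toy inverse mapping: reconstructs both source images from the fused image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1), nn.Tanh(),
        )

    def forward(self, fused):
        ir_rec, vis_rec = self.net(fused).chunk(2, dim=1)
        return ir_rec, vis_rec

gen, dec = FusionGenerator(), Decomposer()
opt = torch.optim.Adam(list(gen.parameters()) + list(dec.parameters()), lr=2e-4)
l1 = nn.L1Loss()

ir = torch.rand(4, 1, 64, 64)   # stand-in infrared batch
vis = torch.rand(4, 1, 64, 64)  # stand-in visible batch

fused = gen(ir, vis)
ir_rec, vis_rec = dec(fused)
# Cycle consistency: translating the source pair to the fused domain and back
# should recover both source images.
loss = l1(ir_rec, ir) + l1(vis_rec, vis)
opt.zero_grad()
loss.backward()
opt.step()
```

In a full cycle-consistent adversarial setup, discriminator losses on the fused and reconstructed images would be added to this reconstruction objective, which is what makes training possible without a ground-truth fused image.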