Abstract: This paper proposes an enhanced application-driven image fusion framework designed to improve downstream application results. The framework is based on a deep learning architecture that generates fused images aligned with the requirements of applications such as semantic segmentation and object detection. Two loss functions are introduced: a color-based loss that enforces consistency in the YCbCr space, and an edge-weighted correlation loss that emphasizes structural integrity in high-gradient regions. Together, these loss components produce an application-ready fused image that retains more features from the source images. Experiments on two public datasets demonstrate a significant improvement in mIoU over state-of-the-art methods.
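The abstract names two loss components but does not give their formulas; the following is a minimal NumPy sketch of plausible forms, assuming a BT.601 RGB-to-YCbCr conversion for the color-consistency term and a gradient-magnitude weighting for the correlation term. Function names and the exact weighting scheme are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """BT.601 full-range RGB -> YCbCr; img in [0, 1], shape (H, W, 3)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def color_consistency_loss(fused, source):
    """Illustrative color loss: L1 distance between Cb/Cr channels in YCbCr space."""
    fc = rgb_to_ycbcr(fused)[..., 1:]
    sc = rgb_to_ycbcr(source)[..., 1:]
    return np.mean(np.abs(fc - sc))

def edge_weighted_correlation_loss(fused_y, source_y, eps=1e-8):
    """Illustrative edge-weighted term: luminance correlation weighted
    toward high-gradient (edge) regions of the source image."""
    gy, gx = np.gradient(source_y)
    w = np.sqrt(gx ** 2 + gy ** 2)      # gradient magnitude as edge weight
    w = w / (w.sum() + eps)             # normalize weights to sum to ~1
    mf = (w * fused_y).sum()            # edge-weighted means
    ms = (w * source_y).sum()
    cov = (w * (fused_y - mf) * (source_y - ms)).sum()
    var_f = (w * (fused_y - mf) ** 2).sum()
    var_s = (w * (source_y - ms) ** 2).sum()
    corr = cov / (np.sqrt(var_f * var_s) + eps)
    return 1.0 - corr                   # 0 when fused and source edges agree
```

Both terms vanish when the fused image matches the source, so in a training setup one would sum them (per source image) with the framework's task-driven loss; the relative weighting is left unspecified here.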
External IDs: dblp:conf/visigrapp/GuachoMVS25