Editorial: Recent advances in image fusion and quality improvement for cyber-physical systems, volume II

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · Frontiers in Neurorobotics 2024 · CC BY-SA 4.0
Abstract: Multi-source visual information fusion and quality improvement help robotic systems perceive the real world. Image fusion is a computational technique that merges multi-source images from multiple sensors into a single synthesized image providing a more comprehensive and reliable description. Quality improvement techniques can be used to address the challenge of analyzing low-quality images [1][2][3][4][5][6]. At present, many brain-inspired algorithms and models are being actively proposed for these two tasks, and artificial neural networks, especially deep convolutional neural networks, have become among the most popular techniques for image fusion and quality improvement over the past decade [4][5][6][7][8]. This is an exciting field for the image fusion research community, and many interesting issues remain to be explored, such as deep few-shot learning, unsupervised learning, the application of embodied neural systems, and industrial applications.

How to develop a sound biological neural network and an embedded system that extract the multiple features of source images are two key questions that need to be addressed in the fields of image fusion and quality improvement. Studies in this field can therefore be divided into two aspects: new end-to-end neural network models for merging constituent parts during the image fusion process, and the embodiment of artificial neural networks in image processing systems. In addition, currently booming techniques, including deep neural systems and embodied artificial intelligence systems, are considered potential future trends for reinforcing image fusion performance and quality improvement.

The paper by Zhang et al. introduces a palmprint recognition method based on a gating mechanism and adaptive feature fusion. They propose a new network structure, GLGAnet, for extracting local and global features of palmprints. The method incorporates a gating mechanism to control the features extracted by deep convolutional layers and Transformer modules, along with an adaptive convolution fusion module for multi-level feature fusion. Experimental results demonstrate that their method outperforms existing approaches on two datasets.

Many previous works overlooked the crucial support-query set interaction and the deeper information that remains to be explored. Zeng et al. propose a duplex network model that uses a suppression-and-focus concept to address this issue. Their network includes dynamic convolution, a prototype matching structure, and a hybrid attention module called DAAConv. In dataset experiments, the DPMCN model demonstrates superior performance over traditional prototype-based methods.

In the third work, Peng et al. propose a network structure called the Context-Aware Lightweight Super-Resolution Network, which enhances the resolution of remote sensing images. The network combines local and global features and includes a dynamic weight generation branch to improve image quality while maintaining computational efficiency. Compared with existing methods, the proposed approach can reconstruct high-quality images at a lower cost.

In the fourth study, titled "Feature fusion network based on few-shot fine-grained classification," Yang et al. introduce the Feature Fusion Similarity Network (FFSNet). This model employs global measures to accentuate inter-class differences while using local measures to consolidate intra-class data, greatly enhancing the model's generalization ability. The effectiveness of the proposed method has been validated. In
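Several of the contributions above revolve around gated, adaptive fusion of complementary feature streams. As a rough illustration of that idea only (this is not the authors' GLGAnet or DPMCN implementation; the function names and gate values below are hypothetical), a per-channel sigmoid gate can arbitrate between a local and a global feature stream:

```python
import math

def sigmoid(x):
    """Standard logistic function, used here as a soft gate."""
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(local_feat, global_feat, gate_logits):
    """Fuse two feature vectors channel by channel.

    For each channel, g = sigmoid(w) acts as a soft switch:
    output = g * local + (1 - g) * global, so a large positive
    logit favors the local stream and a large negative one the
    global stream. In a real network the logits would be learned.
    """
    return [
        sigmoid(w) * l + (1.0 - sigmoid(w)) * g
        for l, g, w in zip(local_feat, global_feat, gate_logits)
    ]

# Example: the first gate is almost fully open to the local
# stream, the second to the global stream.
fused = gated_fusion([1.0, 0.0], [0.0, 1.0], [10.0, -10.0])
print([round(v, 3) for v in fused])  # → [1.0, 1.0]
```

A zero logit gives an even blend of the two streams, which is why such gates degrade gracefully to simple averaging when neither source dominates.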
