An Overview of Multimodal Remote Sensing Data Fusion: From Image to Feature, From Shallow to Deep

Published: 01 Jan 2021 · Last Modified: 13 Nov 2024 · IGARSS 2021 · License: CC BY-SA 4.0
Abstract: With the ever-growing availability of different remote sensing (RS) products from both satellite and airborne platforms, the simultaneous processing and interpretation of multimodal RS data have become increasingly significant in the RS field. The differing resolutions, contexts, and sensors of multimodal RS data enable more accurate identification and recognition of materials on the Earth's surface by describing the same object from different points of view. As a result, multimodal RS data fusion has emerged as a hot research direction in recent years. This paper presents an overview of multimodal RS data fusion across several mainstream applications, which can be roughly categorized as 1) image pansharpening, 2) hyperspectral and multispectral image fusion, 3) multimodal feature learning, and 4) crossmodal feature learning. For each topic, we briefly describe the research problem to be addressed in multimodal RS data fusion and present representative, state-of-the-art models from shallow to deep perspectives.
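Of the four topics above, image pansharpening is the most classical. As a concrete illustration (not taken from the paper itself), the sketch below implements the Brovey transform, one of the simplest shallow component-substitution pansharpening methods; it assumes NumPy and a multispectral image already resampled to the panchromatic grid, and the function name and synthetic data are illustrative only.

```python
import numpy as np

def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Brovey-transform pansharpening (illustrative sketch, not the paper's method).

    ms  : (H, W, B) multispectral image, upsampled to the panchromatic grid.
    pan : (H, W)    panchromatic image, co-registered with `ms`.
    Returns a (H, W, B) pansharpened image.
    """
    # Crude intensity estimate: equal-weight average over spectral bands.
    intensity = ms.mean(axis=2, keepdims=True)
    # Per-pixel injection gain: ratio of PAN to the estimated intensity.
    ratio = pan[..., np.newaxis] / (intensity + eps)
    # Scale every band by the gain so spatial detail from PAN is injected
    # while band ratios (and hence spectral shape) are preserved.
    return ms * ratio

# Usage on synthetic data: a 4-band MS patch and its PAN counterpart.
rng = np.random.default_rng(0)
ms = rng.random((64, 64, 4))
pan = rng.random((64, 64))
sharpened = brovey_pansharpen(ms, pan)
print(sharpened.shape)  # (64, 64, 4)
```

Component-substitution methods like this trade some spectral fidelity for spatial sharpness; the deep models surveyed in the paper instead learn the detail-injection mapping from data.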
