Multiresolution Multimodal Sensor Fusion for Remote Sensing Data With Label Uncertainty

2020 (modified: 03 Nov 2022) IEEE Trans. Geosci. Remote. Sens. 2020
Abstract: In remote sensing, each sensor can provide complementary or reinforcing information. It is valuable to fuse outputs from multiple sensors to boost overall performance. Previous supervised fusion methods often require accurate labels for each pixel in the training data. However, in many remote-sensing applications, pixel-level labels are difficult or infeasible to obtain. In addition, outputs from multiple sensors often have different resolutions or modalities. For example, rasterized hyperspectral imagery (HSI) presents data in a pixel grid while airborne light detection and ranging (LiDAR) generates dense 3-D point clouds. It is often difficult to directly fuse such multimodal, multiresolution data. To address these challenges, we present a novel multiple instance multiresolution fusion (MIMRF) framework that can fuse multiresolution and multimodal sensor outputs while learning from automatically generated, imprecisely labeled data. Experiments were conducted on the MUUFL Gulfport HSI and LiDAR data set and a remotely sensed soybean and weed data set. Results show improved, consistent performance on scene understanding and agricultural applications when compared to traditional fusion methods.