Multi³Net: Segmenting Flooded Buildings via Fusion of Multiresolution, Multisensor, and Multitemporal Satellite Imagery

30 Sept 2018 (modified: 05 May 2023) · NIPS 2018 Workshop Spatiotemporal Blind Submission
Keywords: image segmentation, deep learning, computer vision, flood detection, disaster response, spatiotemporal data, satellite imagery
TL;DR: We present a novel approach to performing rapid segmentation of flooded buildings by fusing multiresolution, multisensor, and multitemporal satellite imagery in a convolutional neural network.
Abstract: We present a novel approach to performing rapid segmentation of flooded buildings by fusing multiresolution, multisensor, and multitemporal satellite imagery in a convolutional neural network. Our method significantly expedites the generation of satellite imagery-based flood maps, which are crucial for first responders and local authorities in the early stages of flood events. By incorporating multitemporal satellite imagery, our approach allows for rapid and accurate post-disaster damage assessment, helping governments better coordinate medium- and long-term financial assistance programs for affected areas. Our model consists of multiple streams of encoder-decoder architectures that extract temporal information from medium-resolution images and spatial information from high-resolution images before fusing the resulting representations into a single medium-resolution segmentation map of flooded buildings. We demonstrate that our model produces highly accurate segmentations of flooded buildings using only freely available medium-resolution imagery and can be further improved with very high-resolution (VHR) data.
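The multi-stream fusion idea described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, assuming a generic per-stream encoder-decoder and simple concatenation-based fusion; the stream count, channel widths, layer depths, and output resolution are illustrative assumptions, not the configuration used in Multi³Net.

```python
# Hypothetical sketch of multi-stream encoder-decoder fusion.
# All sizes and channel counts are illustrative, not the authors' settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EncoderDecoder(nn.Module):
    """One stream: downsample to extract features, then upsample back."""

    def __init__(self, in_channels, features=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, features, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(features, features, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(features, features, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class MultiStreamFusion(nn.Module):
    """Fuse per-sensor streams into one medium-resolution segmentation map."""

    def __init__(self, stream_channels, num_classes=2, features=32):
        super().__init__()
        self.streams = nn.ModuleList(
            EncoderDecoder(c, features) for c in stream_channels
        )
        self.fuse = nn.Conv2d(features * len(stream_channels), num_classes, 1)

    def forward(self, inputs, out_size):
        # Resample every stream's output to the target (medium) resolution
        # before concatenating and predicting per-pixel class scores.
        feats = [
            F.interpolate(stream(x), size=out_size,
                          mode="bilinear", align_corners=False)
            for stream, x in zip(self.streams, inputs)
        ]
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    # Toy inputs: a 2-channel medium-resolution stream and a 3-channel
    # higher-resolution optical stream, fused at a 96x96 output grid.
    model = MultiStreamFusion(stream_channels=[2, 3])
    medium = torch.randn(1, 2, 96, 96)
    vhr = torch.randn(1, 3, 192, 192)
    logits = model([medium, vhr], out_size=(96, 96))
    print(logits.shape)  # torch.Size([1, 2, 96, 96])
```

Concatenation followed by a 1×1 convolution is only one simple fusion strategy; in practice the streams could also be fused at intermediate feature levels or with learned weighting.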