Fine-tuning the Geospatial Foundation Model for Land Cover Mapping

Published: 29 Jul 2024 · Last Modified: 29 Jul 2024 · Fragile Earth · Full Presentation · CC BY 4.0
Keywords: Remote Sensing, Land Use Land Cover, Foundation Models
TL;DR: We propose a strategy for fine-tuning a cutting-edge geospatial foundation model for Land Use Land Cover mapping. This approach addresses existing challenges in fine-tuning, requiring less labeled data while improving accuracy.
Abstract: Land use and land cover (LULC) play pivotal roles in achieving several of the sustainable development goals established by United Nations member states for the social, economic, and environmental advancement of our planet and its inhabitants. Understanding LULC and its dynamics is crucial for gaining insight into the changing composition and spatial distribution of land-surface features across diverse landscapes. While researchers have explored various AI/ML approaches using remote sensing images spanning several decades, existing LULC mapping techniques face challenges related to accuracy, the need for substantial labeled training data, and adaptability to different geographical regions, among others. Foundation models have recently gained significant traction due to their ability to alleviate labeled-data scarcity. In this study, we therefore propose a fine-tuning strategy built on a cutting-edge geospatial foundation model jointly developed by IBM and NASA, known as Prithvi [1], to address these challenges, in particular the need for large labeled datasets and limited accuracy. This paper presents the results achieved using Prithvi for LULC mapping with a relatively small training dataset (565 images of 224 $\times$ 224 pixels in total). We compare the performance of the Prithvi model against a traditional deep-learning-based U-Net model and a large Vision Transformer (ViT) model. The results demonstrate that Prithvi surpasses both U-Net and ViT in terms of mean Intersection over Union (IoU) across several LULC classes.

[1] arXiv preprint arXiv:2310.18660, 2023.
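The fine-tuning setup the abstract describes can be pictured as attaching a lightweight segmentation head to a pretrained, frozen encoder and training only the head on the small labeled set. Below is a minimal PyTorch sketch of that general pattern, not the authors' implementation: the encoder stand-in, embedding dimension, patch size, and six-class LULC label set are all illustrative assumptions, and Prithvi itself is not loaded here.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 6   # assumed number of LULC classes (illustrative)
EMBED_DIM = 768   # assumed encoder embedding dimension
PATCH = 16        # assumed ViT-style patch size

class SegmentationHead(nn.Module):
    """Project per-patch tokens to class logits and upsample to pixel space."""
    def __init__(self, embed_dim: int, num_classes: int, patch: int):
        super().__init__()
        self.proj = nn.Conv2d(embed_dim, num_classes, kernel_size=1)
        self.up = nn.Upsample(scale_factor=patch, mode="bilinear",
                              align_corners=False)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) patch embeddings from the frozen encoder
        b, n, d = tokens.shape
        side = int(n ** 0.5)
        fmap = tokens.transpose(1, 2).reshape(b, d, side, side)
        return self.up(self.proj(fmap))  # (B, num_classes, H, W)

# encoder = ...  # the pretrained geospatial encoder would be loaded here
# for p in encoder.parameters():
#     p.requires_grad = False  # freeze the foundation model; train only the head

head = SegmentationHead(EMBED_DIM, NUM_CLASSES, PATCH)
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in encoder outputs and labels
tokens = torch.randn(2, (224 // PATCH) ** 2, EMBED_DIM)
labels = torch.randint(0, NUM_CLASSES, (2, 224, 224))
loss = criterion(head(tokens), labels)
loss.backward()
optimizer.step()
print(f"training-step loss: {loss.item():.3f}")
```

Evaluation along the lines of the reported comparison would then compute per-class IoU between predicted and reference label maps and average the per-class values into the mean IoU cited above.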
Submission Number: 4