Investigating the Benefits of Foundation Models for Mars Science

Published: 22 Jul 2024 · Last Modified: 19 May 2025 · Tenth International Conference on Mars 2024 (LPI Contrib. No. 3007) · CC BY 4.0
Abstract: Planetary science increasingly relies on automated analysis of large volumes of orbital data to overcome the limitations of manual interpretation. This study investigates the application of foundation models—large neural networks pre-trained on diverse datasets—to Mars science tasks using orbital imagery. We evaluate several pre-training strategies, including no pre-training (training from scratch), supervised pre-training on ImageNet and DoMars16, and self-supervised pre-training on Context Camera (CTX) data, on two downstream tasks: HiRISE Landmark Classification and Martian Frost Detection. Our experiments compare Vision Transformer (ViT) and Inception architectures and demonstrate that pre-trained models significantly outperform models trained from scratch. Notably, ViT models pre-trained on ImageNet achieve the strongest performance despite ImageNet being out-of-domain, and self-supervised CTX pre-training yields comparable results with substantially less data. These findings highlight the potential of foundation models and self-supervised learning for scalable, domain-adapted analysis in Martian science, with implications for future tasks such as crater counting and water detection.
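To make the fine-tuning setup concrete, below is a minimal sketch of the transfer-learning recipe the abstract describes: take an ImageNet-pretrained ViT, re-initialize its classification head for a Mars task, and fine-tune on orbital image tiles. The dataset path, class count, grayscale-to-RGB adaptation, and hyperparameters are illustrative assumptions, not the paper's reported configuration.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained ViT on a Mars
# classification task (e.g., HiRISE landmark classification).
# Paths, class count, and hyperparameters are assumptions.
import torch
import timm
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

NUM_CLASSES = 8  # assumed number of landmark classes
device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet-pretrained ViT; timm re-initializes the head for NUM_CLASSES.
model = timm.create_model(
    "vit_base_patch16_224", pretrained=True, num_classes=NUM_CLASSES
).to(device)

# Orbital tiles are grayscale; replicate to 3 channels to match the
# pretrained input expectation (a common adaptation, assumed here).
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])
train_set = datasets.ImageFolder("hirise_landmarks/train",  # hypothetical path
                                 transform=transform)
loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # short schedule, for illustration only
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The same loop applies to a CTX-pretrained checkpoint by swapping the `pretrained=True` weights for self-supervised ones; the abstract does not specify the self-supervised objective, so that step is left out of the sketch.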