AtmosArena: Benchmarking Foundation Models for Atmospheric Sciences

Published: 11 Oct 2024, Last Modified: 02 Nov 2024 · NeurIPS 2024 Workshop FM4Science Poster · CC BY 4.0
Keywords: benchmark, foundation model, atmospheric science, climate science, climate change
TL;DR: We introduce AtmosArena, the first multi-task benchmark dedicated to foundation models in atmospheric sciences.
Abstract: Deep learning has emerged as a powerful tool for atmospheric sciences, showing significant utility across various tasks in weather and climate modeling. In line with recent progress in language and vision foundation models, there are growing efforts to scale and finetune such models for multi-task spatiotemporal reasoning. Despite promising results, existing works often evaluate their models on a small set of non-uniform tasks, which makes it hard to quantify broad generalization across diverse tasks and domains. To address this challenge, we introduce AtmosArena, the first multi-task benchmark dedicated to foundation models in atmospheric sciences. AtmosArena comprises a suite of tasks that cover a broad spectrum of applications in atmospheric physics and atmospheric chemistry. To showcase the capabilities and key features of our benchmark, we conducted extensive experiments evaluating two state-of-the-art deep learning models, ClimaX and Stormer, on AtmosArena, and compared their performance with other deep learning and traditional baselines. By providing a standardized, open-source benchmark, we aim to facilitate further advancements in the field, much like open-source benchmarks have driven the development of foundation models for language and vision.
Submission Number: 13