Keywords: Foundation model, time-domain astronomy, transformer, self-supervised learning, contrastive learning
TL;DR: We present Maven, a foundation model for supernova science, which reaches state-of-the-art performance in multiple downstream tasks.
Abstract: We present Maven, a foundation model for supernova science. Maven is trained using self-supervised contrastive learning to align photometric and spectroscopic time-series observations in a shared embedding space. The model is first pre-trained on 0.5M synthetic supernovae and then fine-tuned on 4,702 real observations from the Zwicky Transient Facility. Maven achieves state-of-the-art performance in supernova classification and redshift estimation, demonstrating the effectiveness of its learned embeddings across multiple downstream tasks. We find that pre-training with synthetic data significantly improves model performance. Maven is designed to address a common challenge in astrophysics: consolidating sparse but information-dense data with abundant lower-quality or synthetic data. Our approach offers a scalable solution for large, unlabeled, and multimodal astronomical datasets, and paves the way for upcoming surveys such as those from the Vera C. Rubin Observatory.
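The abstract describes aligning photometric and spectroscopic embeddings via self-supervised contrastive learning. As a minimal sketch of how such an objective could work, the snippet below implements a symmetric CLIP-style (InfoNCE) loss over paired embeddings. The function name, the temperature value, and the choice of InfoNCE are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def contrastive_alignment_loss(photo_emb, spec_emb, temperature=0.07):
    """Symmetric InfoNCE loss over paired (photometry, spectroscopy) embeddings.

    photo_emb, spec_emb: arrays of shape (batch, dim); row i of each is
    assumed to come from the same supernova (a positive pair).
    NOTE: this is an illustrative sketch, not Maven's actual objective.
    """
    # L2-normalize so similarity is the cosine of the angle between embeddings
    p = photo_emb / np.linalg.norm(photo_emb, axis=1, keepdims=True)
    s = spec_emb / np.linalg.norm(spec_emb, axis=1, keepdims=True)
    # Pairwise similarity matrix, scaled by a temperature hyperparameter
    logits = p @ s.T / temperature
    n = logits.shape[0]

    def cross_entropy(l):
        # Softmax cross-entropy with the matching pair on the diagonal
        l = l - l.max(axis=1, keepdims=True)  # for numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average the photometry->spectroscopy and spectroscopy->photometry terms
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimizing this loss pulls each supernova's two modalities together in the shared space while pushing apart embeddings of different events, which is what makes the resulting representation useful for downstream tasks like classification and redshift estimation.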
Submission Number: 64