Self-supervised Synthetic Pretraining for Inference of Stellar Mass Embedded in Dense Gas

ICLR 2026 Conference Submission 18998 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: synthetic pretraining, representation learning, fluid simulations
TL;DR: Pretraining a vision transformer on synthetic fractal images with self-supervised learning mitigates the lack of labeled data, improving zero-shot performance on scientific simulation data.
Abstract: Stellar mass is a fundamental quantity that determines the properties and evolution of stars. However, estimating stellar masses in star-forming regions is challenging: young stars are obscured by dense gas, and the regions are highly inhomogeneous, so dynamical estimates that assume spherical symmetry are unreliable. Supervised machine learning could link such complex structures to stellar mass, but it requires large, high-quality labeled datasets from high-resolution magneto-hydrodynamical (MHD) simulations, which are computationally expensive. We address this by pretraining a vision transformer on one million synthetic fractal images with the self-supervised framework DINOv2, and then applying the frozen model to a limited set of high-resolution MHD simulations. Our results demonstrate that synthetic pretraining improves frozen-feature stellar mass predictions, with the pretrained model performing slightly better than a supervised model trained on the same limited simulations. Principal component analysis of the extracted features further reveals semantically meaningful structures, suggesting that the model enables unsupervised segmentation of star-forming regions without labeled data or even lightweight fine-tuning.
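The abstract describes a frozen-feature pipeline: a self-supervised ViT backbone provides features from which stellar mass is regressed, and PCA on the patch tokens yields a label-free segmentation. The sketch below illustrates that pipeline under stated assumptions; it is not the authors' code. The public `dinov2_vits14` checkpoint from `facebookresearch/dinov2` stands in for the paper's fractal-pretrained backbone, and the random tensors, ridge regressor, and image sizes are placeholders for the MHD column-density data and whatever prediction head the paper actually uses.

```python
# Minimal sketch (assumptions noted above): frozen ViT features for stellar-mass
# regression and PCA-based segmentation of patch tokens.
import numpy as np
import torch
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen backbone; swap in a fractal-pretrained DINOv2-style checkpoint if available.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").to(device).eval()

@torch.no_grad()
def extract_features(images: torch.Tensor):
    """images: (N, 3, 224, 224) column-density maps replicated to 3 channels."""
    out = backbone.forward_features(images.to(device))
    cls_tokens = out["x_norm_clstoken"].cpu().numpy()       # (N, D) global features
    patch_tokens = out["x_norm_patchtokens"].cpu().numpy()  # (N, P, D) local features
    return cls_tokens, patch_tokens

# --- Frozen-feature stellar-mass regression (labels from the limited MHD runs) ---
train_images = torch.rand(32, 3, 224, 224)  # stand-in column-density maps
train_masses = np.random.rand(32)           # stand-in stellar-mass labels
cls_train, _ = extract_features(train_images)
regressor = Ridge(alpha=1.0).fit(cls_train, train_masses)

test_images = torch.rand(4, 3, 224, 224)
cls_test, patch_test = extract_features(test_images)
predicted_masses = regressor.predict(cls_test)

# --- PCA on patch tokens: leading components highlight coherent structures,
# --- giving an unsupervised segmentation of the star-forming region.
n, p, d = patch_test.shape
components = PCA(n_components=3).fit_transform(patch_test.reshape(n * p, d))
grid = int(p ** 0.5)                        # 16x16 patches for 224px / patch size 14
segmentation_maps = components.reshape(n, grid, grid, 3)
print(predicted_masses.shape, segmentation_maps.shape)
```

Keeping the backbone frozen and using only a lightweight regressor on its features mirrors the abstract's claim that the pretrained features alone, without fine-tuning, carry the information needed for mass prediction and segmentation.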
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 18998