The Bearable Lightness of Big Data: Towards Massive Public Datasets in Scientific Machine Learning

Published: 15 Jun 2022 · Last Modified: 22 Oct 2023 · ICML-AI4Science Poster
Keywords: Big data, Deep learning, Lossy compression, High performance computing, Turbulent reacting flows
TL;DR: We propose a realistic framework for curating and sharing big data in scientific machine learning.
Abstract: In general, large datasets enable deep learning models to achieve good accuracy and generalizability. However, massive high-fidelity simulation datasets (from molecular chemistry, astrophysics, computational fluid dynamics (CFD), etc.) can be challenging to curate due to their dimensionality and storage requirements. Lossy compression algorithms can mitigate storage constraints, as long as overall data fidelity is preserved. To illustrate this point, we demonstrate that deep learning models trained and tested on data from a petascale CFD simulation remain robust to the errors introduced by lossy compression in a semantic segmentation problem. Our results show that lossy compression algorithms offer a realistic pathway for exposing high-fidelity scientific data to open-source data repositories and building community datasets. In this paper, we outline, construct, and evaluate the requirements for establishing such a big data framework for scientific machine learning.
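To make the compression trade-off concrete, the sketch below implements a toy error-bounded quantize-and-deflate scheme and measures the resulting compression ratio and pointwise error on a smooth 2-D field. The function names (`lossy_compress`, `decompress`) and the `rel_tol` parameter are illustrative assumptions; the paper's actual compressor and CFD data are not reproduced here. Production error-bounded compressors (e.g., SZ or ZFP) are far more sophisticated, but the principle of bounding the reconstruction error relative to the field's range is the same.

```python
import numpy as np
import zlib

def lossy_compress(field, rel_tol=1e-3):
    """Toy error-bounded lossy compressor (illustrative, not the paper's method):
    uniformly quantize a float64 field with step = rel_tol * (max - min),
    then deflate the integer codes with zlib."""
    lo, hi = float(field.min()), float(field.max())
    step = rel_tol * (hi - lo) if hi > lo else 1.0
    codes = np.round((field - lo) / step).astype(np.int32)
    payload = zlib.compress(codes.tobytes(), level=9)
    return payload, (lo, step, field.shape)

def decompress(payload, meta):
    """Invert the quantization; reconstruction error is bounded by step / 2."""
    lo, step, shape = meta
    codes = np.frombuffer(zlib.decompress(payload), dtype=np.int32)
    return codes.reshape(shape).astype(np.float64) * step + lo

# A smooth, flow-like stand-in field (the real data would be CFD snapshots).
x = np.linspace(0.0, 2.0 * np.pi, 256)
field = np.sin(x)[:, None] * np.cos(x)[None, :]

payload, meta = lossy_compress(field, rel_tol=1e-3)
recon = decompress(payload, meta)

ratio = field.nbytes / len(payload)
max_rel_err = np.abs(field - recon).max() / (field.max() - field.min())
print(f"compression ratio: {ratio:.1f}x, max relative error: {max_rel_err:.2e}")
```

Because the quantization step is tied to the field's dynamic range, the maximum pointwise error stays below `rel_tol / 2` of that range, which is the kind of guarantee that lets one check whether downstream models (such as the segmentation networks discussed above) are robust to the compression.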
Track: Original Research Track