Towards Large-scale Clinical Multi-variate Time-series Datasets

Published: 10 Oct 2024 · Last Modified: 26 Nov 2024 · NeurIPS 2024 TSALM Workshop · CC BY 4.0
Keywords: Time-Series, Electronic Health Records, Large-Scale Datasets, Data Harmonization, Benchmark, Transfer, Out-of-Distribution
TL;DR: We carefully aggregated a large collection of clinical time-series datasets to create and benchmark the largest dataset of its kind to date.
Abstract: Notable progress has been made in generalist medical Large Language Models (LLMs) across various healthcare areas. However, large-scale modeling of in-hospital time-series data, such as vital signs, lab results, and treatments in Intensive Care Units (ICUs), remains underexplored. Existing ICU datasets are relatively small, but combining them can enhance patient diversity and improve model robustness. To generalize across hospitals, models must also address distribution shifts caused by varying treatment policies, which requires harmonizing treatment variables across datasets. This work aims to establish a foundation for training large-scale multi-variate time-series models on critical care data and to provide a benchmark for transfer learning across hospitals, so that distribution-shift challenges can be studied and addressed. We introduce a harmonized dataset for research in sequence modeling and transfer learning, representing the first large-scale collection to include core treatment variables. Future plans involve expanding this dataset to further support advancements in transfer learning and the development of scalable, generalizable models for critical healthcare applications.
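As a toy sketch of the kind of harmonization the abstract describes, the snippet below maps two hypothetical hospital exports with different column names and units onto a shared schema. All dataset contents, column names, and the mapping are illustrative assumptions, not the paper's actual variables or pipeline.

```python
# Minimal harmonization sketch: align column names and units across two
# hypothetical ICU exports, then pool them into one table. Everything
# here (names, values, schema) is invented for illustration.
import pandas as pd

# Hypothetical raw export from hospital A (Fahrenheit temperatures).
hospital_a = pd.DataFrame({
    "stay_id": [1, 1, 2],
    "charttime": pd.to_datetime(
        ["2024-01-01 00:00", "2024-01-01 01:00", "2024-01-01 00:30"]),
    "hr": [82, 88, 95],             # heart rate, beats/min
    "temp_f": [98.6, 100.4, 99.1],  # temperature, Fahrenheit
})

# Hypothetical raw export from hospital B (Celsius temperatures).
hospital_b = pd.DataFrame({
    "icustay": [7, 7],
    "time": pd.to_datetime(["2024-02-01 00:00", "2024-02-01 02:00"]),
    "heart_rate": [70, 74],         # beats/min
    "temp_c": [37.0, 38.2],         # Celsius
})

# Rename source columns onto the shared schema and convert units.
a = hospital_a.rename(columns={"charttime": "time", "hr": "heart_rate"})
a["temperature_c"] = (a.pop("temp_f") - 32) * 5 / 9  # F -> C
b = hospital_b.rename(
    columns={"icustay": "stay_id", "temp_c": "temperature_c"})

# Pool the harmonized tables, keeping track of the source hospital,
# which is what enables cross-hospital transfer-learning splits.
harmonized = pd.concat([a, b], ignore_index=True)
harmonized["source"] = ["A"] * len(a) + ["B"] * len(b)
print(harmonized)
```

Tracking the source hospital in the pooled table is what allows held-out-hospital splits, the setup needed to study the distribution shifts the abstract mentions.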
Submission Number: 98