Abstract: In this paper, we investigate the application of large language models (LLMs) to zero-shot and few-shot prediction and classification of multimodal wearable sensor data. Using the large-scale Homekit2020 dataset, we explore health tasks including cardiac activity monitoring, metabolic health prediction, and sleep detection. We demonstrate that LLMs perform feature extraction, prediction, and classification with performance comparable to or higher than classical machine learning approaches, even in the zero-shot setting. Our findings suggest that LLMs are a promising tool for wearable sensor data analysis and interpretation.
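To make the zero-shot setup concrete, here is a minimal sketch of how wearable sensor readings might be serialized into a text prompt and classified by an LLM. The prompt format, the sleep-detection question, and the `query_llm` stub are illustrative assumptions for this sketch, not the authors' actual pipeline or prompts.

```python
# Minimal sketch: zero-shot classification of a wearable sensor window via an LLM.
# All names here (build_zero_shot_prompt, query_llm) are hypothetical.

from typing import Sequence


def build_zero_shot_prompt(heart_rate: Sequence[int], steps: Sequence[int]) -> str:
    """Serialize one window of minute-level sensor readings into a text prompt."""
    return (
        "You are given minute-level wearable sensor readings.\n"
        f"Heart rate (bpm): {', '.join(map(str, heart_rate))}\n"
        f"Step count: {', '.join(map(str, steps))}\n"
        "Question: Is the wearer asleep or awake during this window?\n"
        "Answer with exactly one word: asleep or awake."
    )


def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns a canned answer for the demo."""
    return "asleep"


if __name__ == "__main__":
    hr = [52, 51, 53, 50, 49, 50, 52, 51, 50, 49]  # low, stable resting heart rate
    steps = [0] * 10                               # no movement in the window
    prompt = build_zero_shot_prompt(hr, steps)
    print(prompt)
    print("Model answer:", query_llm(prompt))
```

A few-shot variant of the same idea would prepend a handful of labeled example windows to the prompt before the query window, which is the only change needed to move between the two settings.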
Keywords: large language models, wearable sensors, health tasks, few-shot learning
TL;DR: Investigate the capability of large language models for wearable sensor data analysis.
Submission Number: 127