LLaSA: Large Multimodal Agent for Human Activity Analysis Through Wearable Sensors

ACL ARR 2024 June Submission 4187 Authors

16 Jun 2024 (modified: 05 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Integrating inertial measurement units (IMUs) with large language models (LLMs) advances multimodal AI by enhancing human activity understanding. We introduce SensorCaps, a dataset of 26,288 IMU-derived activity narrations, and OpenSQA, an instruction-following dataset with 257,562 question-answer pairs. Combining LIMU-BERT and LLaMA, we develop LLaSA, a Large Multimodal Agent capable of interpreting and responding to activity and motion analysis queries. Our evaluation demonstrates LLaSA’s effectiveness in activity classification and question answering, highlighting its potential in healthcare, sports science, and human-computer interaction. These contributions advance sensor-aware language models and open new research avenues.
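A minimal conceptual sketch of the kind of sensor-to-LLM coupling the abstract describes: a pretrained IMU encoder (e.g., LIMU-BERT) produces sensor embeddings that are projected into the LLM's token-embedding space and prepended to the text prompt. This is an assumption-laden illustration, not the authors' implementation; the module, dimension, and variable names (SensorToLLMAdapter, sensor_dim, llm_dim) are hypothetical.

```python
import torch
import torch.nn as nn


class SensorToLLMAdapter(nn.Module):
    """Illustrative adapter: maps IMU-encoder outputs into an LLM's
    embedding space so they can be prepended to text-token embeddings.
    Names and dimensions are assumptions, not the paper's exact design."""

    def __init__(self, sensor_dim: int = 72, llm_dim: int = 4096):
        super().__init__()
        # Simple linear projection; the actual adapter may differ.
        self.proj = nn.Linear(sensor_dim, llm_dim)

    def forward(self, imu_features: torch.Tensor) -> torch.Tensor:
        # imu_features: (batch, seq_len, sensor_dim), e.g. from an IMU
        # encoder such as LIMU-BERT (interface assumed here).
        return self.proj(imu_features)


if __name__ == "__main__":
    adapter = SensorToLLMAdapter()
    fake_imu = torch.randn(1, 120, 72)       # 120 IMU timesteps (assumed shape)
    sensor_tokens = adapter(fake_imu)        # (1, 120, 4096)
    text_embeds = torch.randn(1, 16, 4096)   # embedded text prompt (placeholder)
    # Concatenate sensor "tokens" ahead of the prompt; the result would be
    # fed to the LLM through an inputs_embeds-style interface.
    llm_inputs = torch.cat([sensor_tokens, text_embeds], dim=1)
    print(llm_inputs.shape)                  # torch.Size([1, 136, 4096])
```

The design choice illustrated here (freeze or lightly tune the sensor encoder, learn a small projection into the language model) is a common pattern for multimodal agents; how LLaSA specifically combines LIMU-BERT and LLaMA is detailed in the paper itself.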
Paper Type: Short
Research Area: Question Answering
Research Area Keywords: multimodal QA, question generation
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 4187