Towards LLMs for Sensor Data: Multi-Task Self-Supervised Learning

Published: 01 Jan 2023 · Last Modified: 01 Oct 2024 · UbiComp/ISWC Adjunct 2023 · CC BY-SA 4.0
Abstract: LLMs for the vision and NLP domains have become popular with the widespread use of ChatGPT and GPT-4. This paper tackles building LLMs for the sensor domain of one-dimensional signals, with activity recognition and emotion detection as downstream tasks. We propose a new Transformer-based self-supervised learner, which we name SENvT. SENvT builds LLMs for sensor data using seven pretext objectives in multi-task learning together with contrastive learning. Experimental results show three findings. First, contrastive learning and the masked token task yielded better results than the other pretext tasks. Second, the masked token task performed better with a 60% masking ratio than with 10%. Third, RGW worked best in accuracy, while the masked token task worked best in F1.
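To make the masked token objective concrete, here is a minimal sketch of BERT-style masked reconstruction on a one-dimensional sensor signal with a Transformer encoder. This is an illustration only, not the authors' SENvT code: the patch length, model sizes, and class name are assumptions, and the real model combines this objective with contrastive learning and other pretext tasks.

```python
import torch
import torch.nn as nn

class MaskedSensorPretrainer(nn.Module):
    """Hypothetical sketch (not the authors' SENvT implementation):
    a Transformer encoder pretrained by reconstructing masked patches
    of a 1-D sensor signal, i.e. a masked token objective."""

    def __init__(self, patch_len=8, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)       # patch -> token
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, nlayers)
        self.head = nn.Linear(d_model, patch_len)        # token -> patch

    def forward(self, x, mask_ratio=0.6):
        # x: (batch, seq_len) raw 1-D signal, split into patches
        b, t = x.shape
        patches = x.view(b, t // self.patch_len, self.patch_len)
        tokens = self.embed(patches)
        # randomly mask a fraction of positions (60% per the paper)
        mask = torch.rand(b, tokens.size(1)) < mask_ratio
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        recon = self.head(self.encoder(tokens))
        # reconstruction loss is computed only on masked positions
        return ((recon - patches) ** 2)[mask].mean()

model = MaskedSensorPretrainer()
signal = torch.randn(4, 64)   # 4 windows of 64 samples each
loss = model(signal)
loss.backward()               # trainable end to end
```

The paper's second finding, that a 60% masking ratio beats 10%, corresponds to the `mask_ratio` argument above: with few positions masked, the encoder can interpolate trivially from neighboring patches, so a heavier ratio forces it to learn longer-range signal structure.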