Abstract: Mobile Health (mHealth) applications rely on supervised Machine Learning (ML) algorithms, which require end-user-labeled data for the training phase. The gold standard for obtaining such labeled data is to send queries to users and gather their responses as labels, conventionally by triggering questions at random times. Active Learning (AL) methods use intelligent query-sending policies that incorporate users' contextual information to maximize the response rate and the informativeness of the collected labeled data. However, the substantial battery drainage that sensing physiological signals imposes on wearable devices underscores the need for an efficient sensing policy in addition to a query-sending policy. In this work, we present a framework that co-optimizes the sensing and querying strategies of wearable devices, leveraging contextual information and the ML model's prediction confidence. We design a Reinforcement Learning (RL) agent that combines contextual parameters with model confidence to make sensing and querying decisions. Our evaluation on an exemplar stress monitoring application showed a 76% reduction in sensing and data transmission energy consumption, with only a 6% drop in user-labeled data.
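For illustration, the sketch below shows one way such a joint sensing-and-querying decision loop could look, assuming a simple tabular Q-learning agent over discretized context and model confidence. The class and function names, the action space, the state features, and the reward weights are all illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of an RL agent that chooses sensing/querying actions from
# context and model confidence. All names and parameters are hypothetical.
import random

ACTIONS = ["idle", "sense", "sense_and_query"]  # assumed action space

class SenseQueryAgent:
    """Tabular Q-learning over discretized (context, confidence) states."""

    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = {}             # maps (state, action) -> estimated value
        self.epsilon = epsilon  # exploration rate
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor

    def _state(self, context, confidence):
        # Discretize: context could be e.g. (activity_bin, time_of_day_bin);
        # confidence in [0, 1] is binned into low / medium / high.
        conf_bin = min(int(confidence * 3), 2)
        return (*context, conf_bin)

    def act(self, context, confidence):
        # Epsilon-greedy action selection over the assumed action space.
        s = self._state(context, confidence)
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((s, a), 0.0))

    def update(self, context, confidence, action, reward,
               next_context, next_confidence):
        # Standard one-step Q-learning update.
        s = self._state(context, confidence)
        s2 = self._state(next_context, next_confidence)
        best_next = max(self.q.get((s2, a), 0.0) for a in ACTIONS)
        old = self.q.get((s, action), 0.0)
        self.q[(s, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

def reward(action, user_responded, energy_cost=0.2, label_value=1.0):
    # Assumed reward shaping: sensing costs energy; a query that yields
    # a user label is rewarded. Weights are placeholders.
    r = 0.0
    if action != "idle":
        r -= energy_cost
    if action == "sense_and_query" and user_responded:
        r += label_value
    return r

# Example step: decide given a (activity_bin, hour_bin) context and low confidence.
agent = SenseQueryAgent()
action = agent.act(context=(1, 2), confidence=0.4)
```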