Intelligent Pooling in Thompson Sampling for Rapid Personalization in Mobile Health

30 Apr 2019 (modified: 05 May 2023) · RL4RealLife 2019
Keywords: mobile health, Thompson sampling, Bayesian random effects models
Abstract: Mobile health (mHealth) applications can provide users with essential and timely feedback. From physical activity suggestions to stress-reduction techniques, mHealth can deliver a wide spectrum of effective treatments. Personalizing these interventions could vastly improve their effectiveness, as individuals vary widely in their response to treatment. An optimal mHealth policy must address the question of when to intervene, and the answer is likely to differ between individuals. The high level of noise inherent in the in situ delivery of mHealth interventions can cripple the learning rate when a policy has access to only a single user's data. When there is limited time to engage users, slow learning is problematic and may raise the risk that users leave a study. To speed up learning an optimal policy for each user, we propose learning personalized policies through intelligent use of other users' data. The proposed learning algorithm pools information across users in a principled, adaptive manner by combining Thompson sampling with a Bayesian random effects model of the reward function. We use data collected from a real-world mobile health study to build a generative model and evaluate the proposed algorithm against two natural alternatives: learning the treatment policy separately for each person and learning a single treatment policy for all people. This work is motivated by our preparations for a real-world follow-up study in which the proposed algorithm will be used on a subset of the participants.
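The core idea, Thompson sampling on top of a random-effects reward model so that each user's posterior borrows strength from the population, can be illustrated with the sketch below. This is a minimal, hypothetical sketch, not the authors' implementation: it assumes linear rewards r = xᵀ(beta + u_i) + Gaussian noise with known variance, a fixed random-effect scale, and heuristic conditional posterior updates in place of full hierarchical inference; the class and argument names (e.g., PooledThompsonSampler) are made up for illustration.

```python
# Minimal sketch (not the authors' implementation) of Thompson sampling with a
# Bayesian random-effects reward model. Illustrative assumptions: linear rewards
# r = x^T (beta + u_i) + Gaussian noise with known variance, a fixed random-effect
# scale sigma_u, and heuristic conditional posterior updates rather than full
# hierarchical inference. All names here are hypothetical.
import numpy as np


class PooledThompsonSampler:
    def __init__(self, d, n_users, sigma_noise=1.0, sigma_u=0.5, sigma_beta=1.0):
        self.sigma_noise2 = sigma_noise ** 2
        # Population-level Gaussian posterior over the shared effect beta.
        self.pop_prec = np.eye(d) / sigma_beta ** 2
        self.pop_b = np.zeros(d)
        # Per-user Gaussian posteriors over the user-specific deviation u_i.
        self.user_prec = [np.eye(d) / sigma_u ** 2 for _ in range(n_users)]
        self.user_b = [np.zeros(d) for _ in range(n_users)]

    @staticmethod
    def _posterior(prec, b):
        cov = np.linalg.inv(prec)
        return cov @ b, cov

    def choose(self, user, contexts):
        """Thompson step: sample beta and u_i, act greedily on the sampled reward."""
        beta = np.random.multivariate_normal(*self._posterior(self.pop_prec, self.pop_b))
        u = np.random.multivariate_normal(*self._posterior(self.user_prec[user],
                                                           self.user_b[user]))
        return int(np.argmax(contexts @ (beta + u)))  # contexts: (n_actions, d)

    def update(self, user, x, reward):
        """Pool the observation into the population posterior and the user's own."""
        outer = np.outer(x, x) / self.sigma_noise2
        self.pop_prec += outer
        self.pop_b += x * reward / self.sigma_noise2
        # Attribute the residual relative to the population mean to u_i
        # (a simplification of joint inference over beta and all u_i).
        pop_mean, _ = self._posterior(self.pop_prec, self.pop_b)
        self.user_prec[user] += outer
        self.user_b[user] += x * (reward - x @ pop_mean) / self.sigma_noise2
```

In this sketch, pooling comes from the shared beta: every user's observations move the population posterior, which shifts the sampled reward for users with little data of their own, while the user-specific u_i lets each person's policy drift away from the population as their own data accumulate.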