Confirmation: I have read and agree with the IEEE BSN 2025 conference's submission policy on behalf of myself and my co-authors.
Keywords: Counterfactual explanations, Diabetes, Digital health, Explainable AI, Metabolic health, Wearable sensors, Machine learning, Data augmentation, Class imbalance
TL;DR: LLMs for counterfactual treatment generation and for data augmentation that enhances model performance
Abstract: Counterfactual explanations (CFs) offer human-centric insights into machine learning predictions by highlighting the minimal changes required to alter an outcome. As such, CFs can serve as (i) interventions for abnormality prevention and (ii) augmented data for training robust models. In this work, we explore large language models (LLMs), specifically GPT-4o-mini, for generating CFs in zero-shot and three-shot settings. We evaluate our approach on two datasets: the AI-READI flagship dataset for stress prediction and a public dataset for heart disease detection. Compared to traditional methods such as DiCE, CFNOW, and NICE, our few-shot LLM-based approach achieves high plausibility (up to 99%), strong validity (up to 0.99), and competitive sparsity. Moreover, using LLM-generated CFs as augmented samples improves downstream classifier performance (an average accuracy gain of 5%), especially in low-data regimes. This demonstrates the potential of prompt-based generative techniques to enhance explainability and robustness in clinical and physiological prediction tasks. Code base: **github.com/shovito66/SenseCF**.
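The abstract describes prompting GPT-4o-mini to produce counterfactuals for tabular physiological samples. Below is a minimal, hedged sketch (not the authors' code) of what such a zero-shot prompting step could look like with the OpenAI Python SDK; the feature names, prompt wording, and JSON output contract are illustrative assumptions only.

```python
# Sketch: zero-shot counterfactual generation with GPT-4o-mini.
# Assumes OPENAI_API_KEY is set; feature names and prompt are hypothetical.
import json
from openai import OpenAI  # pip install openai

client = OpenAI()

def generate_counterfactual(sample: dict, target_label: str) -> dict:
    """Ask the LLM for a minimally changed copy of `sample` whose predicted
    class should become `target_label`."""
    prompt = (
        "You are given a patient's physiological features as JSON. "
        f"Propose a counterfactual with minimal changes so that the predicted "
        f"class becomes '{target_label}'. Return only JSON with the same keys.\n"
        f"Sample: {json.dumps(sample)}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return json.loads(resp.choices[0].message.content)

# Hypothetical wearable-derived features for a stress-prediction sample:
cf = generate_counterfactual(
    {"heart_rate": 96, "eda": 4.2, "sleep_hours": 5.0},
    target_label="not stressed",
)
print(cf)
```

In a three-shot variant, a few (sample, counterfactual) pairs would be prepended to the messages as worked examples; the returned counterfactuals could then be appended to the training set as augmented samples.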
Track: 12. Emerging Topics (e.g. Agentic AI, LLMs for computational health with wearables)
Tracked Changes: pdf
NominateReviewer:
* Hassan Ghasemzadeh (Hassan.Ghasemzadeh@asu.edu)
* Asiful Arefeen (aarefeen@asu.edu)
Potential Reviewers:
* Ebrahim Farahmand (efarahma@asu.edu)
* Nooshin Taheri Chatrudi (ntaheric@asu.edu)
* Saman Khamesian (s.khamesian@asu.edu)
* Mohammad Nur Hossain (mkhan@wpi.edu)
Submission Number: 51