Abstract: The rapid development of Large Language Models (LLMs) has enabled transformative applications across several fields. However, general-purpose language models often fall short in specialized domains, particularly health and prevention. This paper presents a novel method for fine-tuning LLMs on activity-related data to optimize them for health and well-being applications. We empirically evaluate this approach by fine-tuning LLMs on the Cardiac Exercise Research corpus. The fine-tuning uses Quantized Low-Rank Adaptation (QLoRA) to keep the models' memory footprint small while preserving accuracy, semantic understanding, and relevance to health-related queries. On domain-related prompts, the fine-tuned models achieved improved user satisfaction and sentiment scores, supporting the method's effectiveness. This study highlights the potential of domain-specific LLMs in advancing personalized healthcare and instills optimism about the seamless integration of AI within health prevention and well-being domains.
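The abstract names QLoRA as the fine-tuning technique. Below is a minimal sketch of such a setup using the Hugging Face `transformers` and `peft` libraries; the base model, LoRA rank, and target modules are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal QLoRA fine-tuning setup sketch. Model name and hyperparameters
# are illustrative assumptions, not the paper's reported configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical base model

# 4-bit NF4 quantization: the "Quantized" part of QLoRA, which shrinks
# the frozen base model's memory footprint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config
)
model = prepare_model_for_kbit_training(model)

# Low-Rank Adaptation: only these small adapter matrices are trained,
# so the trainable parameter count stays tiny relative to the base model.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of base weights
```

From here, the adapted model can be passed to a standard trainer along with a tokenized domain corpus (in the paper's case, the Cardiac Exercise Research corpus), training only the adapter weights.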