Keywords: Alzheimer’s disease (AD), socially assistive robotics, human–robot interaction, cognitive assessment, explainable AI, large language models
TL;DR: MEMOR-E is a privacy-preserving, explainable mobile quadruped robot that leverages long-context language models and XAI to produce stage-aware, non-diagnostic cognitive summaries from speech for personalized Alzheimer’s assistive care.
Abstract: Alzheimer’s disease is a neurodegenerative disorder characterized by progressive declines in memory and language that reduce independence in daily life, motivating the development of socially assistive robotic support. This paper presents MEMOR-E, a mobile quadruped robot equipped with an interactive tablet interface designed to assist patients and caregivers through medication reminders, routine guidance, memory-oriented interactions, and companionship. We evaluate the feasibility of fine-tuning large language models (LLMs) to emulate stage-consistent cognitive behavior and interpret responses across standard neuropsychological language tasks using audio transcriptions from 235 Alzheimer’s patients along with synthetically generated healthy controls. Additionally, we investigate the use of in-context learning (ICL), where a second LLM generates domain- and severity-level cognitive error summaries. Our results show that MEMOR-E can produce stage-aware, non-diagnostic cognitive summaries that support personalized assistive interactions, while explainable AI mechanisms translate model outputs into transparent, human-readable evidence to enable caregiver oversight and trustworthy human–robot interaction.
Submission Number: 10