Keywords: virtual reality, neck exoskeletons, gaze detection, intent recognition
TL;DR: We use VR to collect paired head-eye movement data to train an ML model to predict a patient's intended head movement conditioned on eye gaze.
Abstract: Dropped head syndrome affects many individuals with neurodegenerative diseases, leaving them unable to support their own head with their neck muscles, causing pain and discomfort, and making everyday tasks difficult to perform. Our long-term goal is to use a powered neck exoskeleton to restore natural neck motion for people with dropped head syndrome. However, determining how a user would like to move their head is challenging. We propose to leverage virtual reality to collect coupled eye and head movement data from healthy individuals and train a machine learning model that predicts user-intended head movement from eye gaze alone. We present preliminary results demonstrating the potential of our learned model. We discuss our ongoing work to compare our learned model with existing, non-learning-based methods. Finally, we discuss our future plans to incorporate human-in-the-loop feedback to enable customization of an assistive robotic neck exoskeleton for users with dropped head syndrome.
Submission Number: 17