Abstract: For people with upper extremity motor impairments, interacting with mobile devices is challenging because it relies on the touchscreen. Existing assistive solutions replace inaccessible touchscreen interactions with sequences of simpler, accessible ones. However, the resulting sequence takes longer to perform than the original interaction, making it unsuitable for mobile video games. In this paper, we extend our prior work on accessible interaction substitutions for video games with a new interaction modality: facial gestures. Our approach allows users to play existing mobile video games using custom facial gestures. Each user defines the gestures according to their own needs, and the system is trained on a small number of facial gesture samples collected from that user. The recorded gestures are then mapped to the touchscreen interactions required to play a target game. Because each interaction corresponds to a single facial gesture, the approach is suitable for interaction with video games. We describe the facial gesture recognition pipeline, motivating our implementation choices through preliminary experiments conducted on example videos of facial gestures recorded by one user without impairments. Preliminary results show that accurate classification of facial gestures (\(97\%\)) is possible even with as few as 5 samples from the user.