Opti-Speech-VMT: Implementation and Evaluation

Published: 01 Jan 2021, Last Modified: 08 Nov 2024 · BODYNETS 2021 · CC BY-SA 4.0
Abstract: We describe Opti-Speech-VMT, a prototype tongue tracking system that uses electromagnetic articulography (EMA) to provide visual feedback during oral movements. Opti-Speech-VMT is specialized for visuomotor tracking (VMT) experiments in which participants follow an oscillating virtual target in the oral cavity using a tongue sensor. The algorithms for linear, curved, and custom trajectories are outlined, and new functionality is briefly presented. Because latency can affect accuracy in VMT tasks, we examined system latency at both the API and total framework levels. Using a video camera, we compared the movement of a sensor (placed on an experimenter's finger) against an oscillating target displayed on a computer monitor. The average total latency was 87.3 ms, with 69.8 ms attributable to the API and 17.4 ms to Opti-Speech-VMT. These results indicate that Opti-Speech-VMT itself contributes little to the overall latency, and underscore the importance of the EMA hardware and the signal-processing optimizations used.
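
The oscillating-target trajectories mentioned in the abstract can be illustrated with a short sketch. The function below is a hypothetical parameterisation of a linear trajectory whose target sweeps sinusoidally between two endpoints in the oral cavity; the names, units, and parameterisation are assumptions for illustration only, not the Opti-Speech-VMT implementation.

```python
import math

def linear_oscillating_target(p_start, p_end, t, period_s):
    """Position of a target oscillating back and forth along the segment
    from p_start to p_end (3-D points, e.g. in mm), as a function of time t
    in seconds. Hypothetical parameterisation, not the paper's code."""
    # Sinusoidal phase in [0, 1]: 0 at p_start, 1 at p_end, back to 0 after one period.
    phase = 0.5 * (1.0 - math.cos(2.0 * math.pi * t / period_s))
    return tuple(a + phase * (b - a) for a, b in zip(p_start, p_end))

# Example: a target sweeping 10 mm along the x-axis with a 2-second period.
print(linear_oscillating_target((0.0, 0.0, 0.0), (10.0, 0.0, 0.0), t=0.5, period_s=2.0))
# -> (5.0, 0.0, 0.0), the midpoint at a quarter period
```

Curved and custom trajectories would replace the linear interpolation with a different path parameterised by the same phase variable.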
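The latency evaluation compares a recorded sensor trajectory against the on-screen target. A generic way to estimate such a lag between two sampled signals is cross-correlation, sketched below; this illustrates the general technique only, not the paper's exact video-based procedure, and the function names and sampling rate are assumed.

```python
import numpy as np

def estimate_lag_ms(target_signal, sensor_signal, fs_hz):
    """Estimate the delay of sensor_signal relative to target_signal
    (both 1-D arrays sampled at fs_hz) via cross-correlation.
    Generic illustration; not the paper's measurement pipeline."""
    t = np.asarray(target_signal, dtype=float) - np.mean(target_signal)
    s = np.asarray(sensor_signal, dtype=float) - np.mean(sensor_signal)
    corr = np.correlate(s, t, mode="full")
    lag_samples = np.argmax(corr) - (len(t) - 1)
    return 1000.0 * lag_samples / fs_hz

# Example: a 1 Hz oscillation delayed by ~87 ms, sampled at 1 kHz for 2 s.
fs = 1000.0
time = np.arange(0.0, 2.0, 1.0 / fs)
target = np.sin(2.0 * np.pi * 1.0 * time)            # target oscillation
sensor = np.sin(2.0 * np.pi * 1.0 * (time - 0.087))  # delayed response
print(round(estimate_lag_ms(target, sensor, fs)))    # approximately 87
```

For reference, the two reported components (69.8 ms for the API and 17.4 ms for Opti-Speech-VMT) sum to 87.2 ms, consistent with the reported 87.3 ms total up to rounding.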