Abstract: We present a method that predicts gestures while they are being entered. The scheme is based on the specification of prominent points that define subgestures within templates. A partial input is classified only against a small set of subgestures pre-selected by nearest-neighbor searches over these prominent points. The prediction is invariant to variations in scale, rotation, translation, and speed of an input, and handles single-touch, single-stroke, and (sequential) multi-touch gestures. We provide a thorough investigation of the classifier's performance on two medium-sized gesture sets. The results are promising, and the approach is feasible for a wide range of applications; even common direct-manipulation operations can be reliably detected.
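To make the pipeline the abstract outlines concrete, here is a minimal sketch, not the authors' implementation: templates are cut at prominent points into subgesture prefixes; an incoming partial stroke is normalized for scale, rotation, translation, and speed, and is then classified only against the few subgestures whose prominent-point descriptors are nearest neighbors of the input. The choice of descriptor (the normalized endpoint), the $1-recognizer-style normalization, and all names and parameters below are assumptions for illustration, since the abstract does not specify them.

```python
# Hypothetical sketch of prominent-point-based gesture prediction.
# Assumptions (not from the paper): endpoint-of-prefix descriptors,
# centroid/scale/rotation normalization, mean path distance for scoring.
import numpy as np

N_RESAMPLE = 32  # fixed point count removes speed/sampling-rate dependence


def resample(points: np.ndarray, n: int = N_RESAMPLE) -> np.ndarray:
    """Resample a 2-D polyline to n equidistant points (speed invariance)."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = np.linspace(0, d[-1], n)
    return np.column_stack([np.interp(t, d, points[:, i]) for i in range(2)])


def normalize(points: np.ndarray) -> np.ndarray:
    """Translate to the centroid, scale to unit size, and rotate so the
    first point lies on the positive x-axis (translation/scale/rotation
    invariance)."""
    p = points - points.mean(axis=0)
    p /= max(np.linalg.norm(p, axis=1).max(), 1e-9)
    angle = np.arctan2(p[0, 1], p[0, 0])
    c, s = np.cos(-angle), np.sin(-angle)
    return p @ np.array([[c, -s], [s, c]]).T


def path_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean point-to-point distance between two equally sampled paths."""
    return float(np.linalg.norm(a - b, axis=1).mean())


class SubgesturePredictor:
    """Hypothetical predictor: index subgesture prefixes of templates,
    pre-select candidates by nearest-neighbor search on descriptors,
    then classify a partial input against that small candidate set."""

    def __init__(self) -> None:
        self.descriptors = []  # prominent-point descriptors (endpoints here)
        self.subgestures = []  # (gesture label, normalized prefix)

    def add_template(self, label: str, points: np.ndarray,
                     prominent_idx: list[int]) -> None:
        """Cut the template at each prominent point and index the prefix
        up to that point as one subgesture."""
        for i in prominent_idx:
            prefix = normalize(resample(points[: i + 1]))
            self.descriptors.append(prefix[-1])  # descriptor: final point
            self.subgestures.append((label, prefix))

    def predict(self, partial: np.ndarray, k: int = 5) -> str:
        """Pre-select the k nearest subgestures by descriptor distance,
        then classify the partial input against only those candidates."""
        p = normalize(resample(partial))
        dists = np.linalg.norm(np.asarray(self.descriptors) - p[-1], axis=1)
        candidates = np.argsort(dists)[:k]
        best = min(candidates,
                   key=lambda i: path_distance(p, self.subgestures[i][1]))
        return self.subgestures[best][0]
```

The point of the two-stage design is that the cheap nearest-neighbor pre-selection bounds the number of expensive path comparisons per input frame, which is what allows classification to run continuously during input rather than after the gesture completes.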