Deep Learning for Automated Localization of Gaze Points and Climbing Holds

Published: 01 Jan 2024, Last Modified: 20 May 2025 · IPTA 2024 · CC BY-SA 4.0
Abstract: Points of gaze (PoGs) and motor behaviors influence sport climbing performance, and analyzing them requires a large dataset of PoGs and climbing holds (CHs) expressed in a global coordinate frame. However, recent eye-tracking devices capture only local views, so global localization is time-consuming when performed manually. This study aims to automate the computation of global PoGs and CHs. A wireless eye-tracking device records local PoGs and CHs during climbs, and artificial landmarks placed on the wall support the mapping to global space. A CNN-based framework detects and classifies these landmarks, and local PoGs and CHs are then transformed into the global frame using a homography transform. Cross-validation assessed the method's success rates and accuracies. The optimal framework computed global PoGs and CHs for 2,460 climbing cases. CH success rates were 80.90% ± 13.98%, with mean Euclidean distance errors of 0.0239 ± 0.0216 m. PoG success rates were 80.79% ± 10.74%. Processing time per frame averaged 115.14 ± 6.80 ms. The resulting datasets will be used to analyze the effects of gaze behaviors on climbing outcomes and to inform a decision-support system for sport climbing.
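To illustrate the homography-based mapping step described above, here is a minimal sketch, assuming OpenCV and hypothetical landmark coordinates; the variable names and values are illustrative only and do not reproduce the authors' implementation. It estimates a homography from landmark correspondences (such as those a CNN detector might produce) and projects a local PoG into global wall coordinates.

```python
import numpy as np
import cv2

# Hypothetical correspondences: landmark positions detected in the local
# eye-tracker frame (pixels) and their known positions on the wall (metres).
local_landmarks = np.array(
    [[120, 80], [510, 95], [495, 410], [130, 400]], dtype=np.float32
)
global_landmarks = np.array(
    [[0.0, 0.0], [1.2, 0.0], [1.2, 1.0], [0.0, 1.0]], dtype=np.float32
)

# Estimate the 3x3 homography mapping local to global coordinates;
# RANSAC discards mismatched landmark correspondences.
H, inliers = cv2.findHomography(local_landmarks, global_landmarks, cv2.RANSAC)

# Project a local point of gaze (pixel coordinates) into the global frame.
# cv2.perspectiveTransform expects an array of shape (N, 1, 2).
local_pog = np.array([[[320.0, 240.0]]], dtype=np.float32)
global_pog = cv2.perspectiveTransform(local_pog, H)
print(global_pog.squeeze())  # global wall coordinates in metres
```

The same transform would apply unchanged to detected CH coordinates, since both PoGs and CHs live in the local camera frame.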