A Cross-modality Deep Learning Method for Measuring Decision Confidence from Eye Movement Signals

Published: 01 Jan 2022 · Last Modified: 28 Apr 2025 · EMBC 2022 · CC BY-SA 4.0
Abstract: Electroencephalography (EEG) signals can effectively measure the level of human decision confidence. However, EEG signals are difficult to acquire in practice because of expensive equipment and complex operation, whereas eye movement signals are much easier to acquire and process. To tackle this problem, we propose a cross-modality deep learning method based on deep canonical correlation analysis (CDCCA), which transforms each modality separately and coordinates the different modalities into a shared hyperspace through specific canonical correlation analysis constraints. In our proposed method, only eye movement signals are used as inputs in the test phase, while the knowledge from EEG signals is learned in the training stage. Experimental results on two human decision confidence datasets demonstrate that our method outperforms existing single-modal approaches trained and tested on eye movement signals and maintains competitive accuracy compared with multimodal models.
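To make the coordination idea concrete, the following is a minimal PyTorch sketch, not the authors' code: two encoders map EEG and eye movement features into a shared space, a CCA-style correlation loss couples the two views during training, and only the eye movement branch is used at test time. All names (EyeEncoder-style helpers, cca_loss, train_step), layer sizes, feature dimensions, and the loss weighting are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

# Placeholder dimensions; the paper's datasets and feature extractors may differ.
EYE_DIM, EEG_DIM, SHARED_DIM, NUM_CLASSES = 33, 310, 40, 2


def _inv_sqrt(mat, eps=1e-12):
    """Inverse matrix square root via eigendecomposition."""
    e, v = torch.linalg.eigh(mat)
    return v @ torch.diag(e.clamp_min(eps).rsqrt()) @ v.t()


def cca_loss(h1, h2, reg=1e-4):
    """Negative sum of canonical correlations between two views of shape (batch, dim)."""
    n = h1.size(0)
    h1 = h1 - h1.mean(0, keepdim=True)
    h2 = h2 - h2.mean(0, keepdim=True)
    c12 = h1.t() @ h2 / (n - 1)
    c11 = h1.t() @ h1 / (n - 1) + reg * torch.eye(h1.size(1))
    c22 = h2.t() @ h2 / (n - 1) + reg * torch.eye(h2.size(1))
    # Singular values of the whitened cross-covariance are the canonical correlations.
    t = _inv_sqrt(c11) @ c12 @ _inv_sqrt(c22)
    return -torch.linalg.svdvals(t).sum()


def mlp(in_dim, out_dim):
    """Simple two-layer encoder; stands in for whatever encoder the paper uses."""
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))


eye_encoder = mlp(EYE_DIM, SHARED_DIM)   # eye movement features -> shared space
eeg_encoder = mlp(EEG_DIM, SHARED_DIM)   # EEG features -> shared space (training only)
classifier = nn.Linear(SHARED_DIM, NUM_CLASSES)  # e.g. confident vs. unconfident

params = (list(eye_encoder.parameters()) + list(eeg_encoder.parameters())
          + list(classifier.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
cross_entropy = nn.CrossEntropyLoss()


def train_step(x_eye, x_eeg, y, alpha=0.5):
    """One step: classification on the eye branch plus a CCA term that pulls
    the eye and EEG embeddings toward a maximally correlated space."""
    z_eye, z_eeg = eye_encoder(x_eye), eeg_encoder(x_eeg)
    loss = cross_entropy(classifier(z_eye), y) + alpha * cca_loss(z_eye, z_eeg)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


@torch.no_grad()
def predict(x_eye):
    """Test phase: only eye movement signals are required."""
    return classifier(eye_encoder(x_eye)).argmax(dim=1)
```

In this sketch the EEG encoder acts purely as a training-time teacher through the correlation term; dropping it at inference leaves a model that consumes eye movement features alone, which mirrors the train/test asymmetry described in the abstract.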
