Subject-Invariant Normalization: A Simple Principle for Robust Sequence Modeling

ICLR 2026 Conference Submission 25072 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: subject-invariant learning, calibration-free models, fixation depth estimation, eye tracking, invariant normalization, cross-dataset generalization, spatiotemporal sequence modeling, robustness, LSTM, TCN, Transformer, deep learning, extended reality (XR), human-computer interaction
TL;DR: We introduce FOVAL, a calibration-free framework that uses subject-invariant normalization to robustly estimate fixation depth across users, devices, and datasets.
Abstract: Accurately estimating fixation depth from gaze signals is essential for applications in extended reality, robotics, and human-computer interaction. However, existing methods rely heavily on subject-specific calibration and dataset-specific preprocessing, limiting their generalization. We introduce FOVAL, a calibration-free framework for fixation depth estimation that combines spatiotemporal sequence models with a novel subject-invariant normalization strategy. Unlike prior work, FOVAL prevents train-test leakage by enforcing train-only normalization and leverages cross-dataset evaluation across three heterogeneous benchmarks (Robust Vision, Tufts Gaze Depth, Gaze-in-the-Wild). We further provide rigorous statistical testing (bootstrap confidence intervals, Wilcoxon tests, effect sizes) and noise robustness analysis to quantify stability under realistic perturbations. Empirically, FOVAL consistently outperforms alternative architectures (Transformers, TCNs, 1D-CNNs, GRUs) and prior baselines, reducing mean absolute error by up to 20% in cross-dataset scenarios. Our results demonstrate that subject-invariant normalization is a simple yet powerful principle for robust gaze-based depth estimation, with implications for broader subject-independent sequence modeling tasks.
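The abstract's central idea, train-only normalization that avoids leaking test statistics into preprocessing, can be illustrated with a minimal sketch. The snippet below is not the authors' FOVAL pipeline; it assumes simple feature-wise standardization, and all function and variable names are illustrative. The point it shows is that normalization statistics are fit on training subjects only and then frozen for held-out subjects.

```python
# Minimal sketch (not the FOVAL implementation) of train-only normalization:
# per-feature statistics are computed from training data alone and reused
# unchanged on unseen subjects, so no test information enters preprocessing.
import numpy as np

def fit_train_only_stats(train_features: np.ndarray):
    """Compute per-feature mean and std from the training split only."""
    mean = train_features.mean(axis=0)
    std = train_features.std(axis=0) + 1e-8  # guard against zero variance
    return mean, std

def apply_normalization(features: np.ndarray, mean: np.ndarray, std: np.ndarray):
    """Apply the frozen training statistics to any split (train/val/test)."""
    return (features - mean) / std

# Usage: subjects are split before any statistics are computed.
rng = np.random.default_rng(0)
train_x = rng.normal(loc=2.0, scale=3.0, size=(1000, 8))  # training subjects
test_x = rng.normal(loc=2.5, scale=3.5, size=(200, 8))    # unseen subjects

mu, sigma = fit_train_only_stats(train_x)
train_norm = apply_normalization(train_x, mu, sigma)
test_norm = apply_normalization(test_x, mu, sigma)  # no refitting on test data
```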
Primary Area: applications to neuroscience & cognitive science
Submission Number: 25072