Abstract: As data-driven analysis methods powered by artificial intelligence have matured, research on visual attention prediction has advanced markedly. However, gaze-point data from eye-tracking devices are often characterized by high noise levels, limiting their accuracy in representing real driver behavior. We therefore propose the Differential 2D Gaussian Ellipse (D2DGE) representation, which captures the gaze distribution within a time window to reduce noise arising from the device or from unconscious glances. To validate D2DGE, we trained a Generative Adversarial Imitation Learning (GAIL) model on both raw gaze data and D2DGE data, and assessed the similarity between the raw and generated data using the Kullback–Leibler divergence. We then compared the five parameters of the D2DGE representation between novice and experienced drivers. Results show that the D2DGE data approximate the raw data more closely and contain richer information than raw gaze data. The findings indicate that D2DGE is a promising alternative for describing gaze distribution during driving.
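For intuition, the sketch below shows one plausible way a windowed gaze distribution could be reduced to a five-parameter Gaussian ellipse. The parameterization used here (center, semi-axes at one standard deviation, and orientation) is an assumption for illustration only; the paper's exact D2DGE parameters, including its differential component, may differ.

```python
import numpy as np

def fit_gaussian_ellipse(gaze_xy: np.ndarray):
    """Fit a 2D Gaussian to gaze points (N, 2) within one time window.

    Returns five illustrative parameters: ellipse center (cx, cy),
    semi-axes (a, b) at one standard deviation, and orientation theta
    in radians. This parameterization is assumed, not taken from the
    paper.
    """
    mean = gaze_xy.mean(axis=0)            # ellipse center
    cov = np.cov(gaze_xy, rowvar=False)    # 2x2 covariance of the window
    eigvals, eigvecs = np.linalg.eigh(cov) # eigenvalues in ascending order
    eigvals = np.clip(eigvals, 0.0, None)  # guard against numerical negatives
    b, a = np.sqrt(eigvals)                # minor, then major semi-axis
    # Orientation of the major axis (eigenvector of the largest eigenvalue).
    theta = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])
    return mean[0], mean[1], a, b, theta

# Hypothetical usage: 120 gaze samples in normalized screen coordinates.
rng = np.random.default_rng(0)
window = rng.normal(loc=[0.5, 0.3], scale=[0.05, 0.02], size=(120, 2))
cx, cy, a, b, theta = fit_gaussian_ellipse(window)
```

Summarizing each window this way smooths over single-sample jitter, which is consistent with the abstract's motivation of suppressing device noise and unconscious glances.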