Learning Dynamic GMM for Attention Distribution on Single-face Videos

Yun Ren, Zulin Wang, Mai Xu, Haoyu Dong, Shengxi Li

28 Mar 2024 (modified: 06 Apr 2017) · CVPR 2017 BNMW Submission · Readers: Everyone
Paper Length: 8 pages
Abstract: The past decade has witnessed the popularity of video conferencing applications such as FaceTime and Skype, in which almost every frame contains a human face. It is therefore desirable to predict attention on face videos via saliency detection, since saliency can serve as region-of-interest (ROI) guidance for content-based applications. To this end, this paper proposes a novel approach for saliency detection in single-face videos. From a data-driven perspective, we first establish an eye-tracking database containing fixations of 70 single-face videos viewed by 40 subjects. Analysis of our database shows that most attention in these videos is attracted by the face, and that the attention distribution within a face varies with face size and mouth movement. Inspired by previous work that applies a Gaussian mixture model (GMM) to face saliency detection in still images, we propose to model visual attention on the face region in videos by a dynamic GMM (DGMM), whose variation depends on face size, mouth movement, and facial landmarks. We then develop a long short-term memory (LSTM) neural network to estimate the DGMM for saliency detection in single-face videos, which we call LSTM-DGMM. Finally, experimental results show that our approach outperforms other state-of-the-art approaches in saliency detection on single-face videos.
Conflicts: buaa.edu.cn
Keywords: saliency detection, face video, dynamic GMM
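The abstract describes an LSTM that predicts the parameters of a dynamic GMM, which is then rendered as a per-frame saliency map. The sketch below illustrates that general idea only; it is not the authors' implementation. The feature dimensionality, number of mixture components, hidden size, and the assumption of diagonal covariances are all hypothetical choices for illustration.

```python
# Minimal sketch of an LSTM-DGMM-style model: an LSTM consumes per-frame
# face features and emits Gaussian-mixture parameters, which are rendered
# into a saliency map. All sizes and feature choices are assumptions.
import torch
import torch.nn as nn

class LSTMDGMM(nn.Module):
    def __init__(self, feat_dim=14, hidden=64, n_components=3):
        super().__init__()
        self.K = n_components
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        # Per component: 1 weight logit + 2-D mean + 2-D log-variance = 5.
        self.head = nn.Linear(hidden, n_components * 5)

    def forward(self, feats):
        # feats: (batch, time, feat_dim) per-frame features, e.g. face
        # size, mouth-movement magnitude, normalized landmark coordinates.
        h, _ = self.lstm(feats)
        p = self.head(h).view(*h.shape[:2], self.K, 5)
        weights = torch.softmax(p[..., 0], dim=-1)         # mixture weights
        means = torch.sigmoid(p[..., 1:3])                 # means in [0,1]^2
        variances = nn.functional.softplus(p[..., 3:5]) + 1e-4  # diagonal
        return weights, means, variances

def render_saliency(weights, means, variances, size=64):
    # Evaluate the diagonal-covariance mixture of one frame on a
    # size x size grid; returns a saliency map scaled to [0, 1].
    ys, xs = torch.meshgrid(torch.linspace(0, 1, size),
                            torch.linspace(0, 1, size), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)                   # (size, size, 2)
    diff = grid[None] - means[:, None, None, :]            # (K, size, size, 2)
    exponent = -0.5 * (diff ** 2 / variances[:, None, None, :]).sum(-1)
    sal = (weights[:, None, None] * torch.exp(exponent)).sum(0)
    return sal / sal.max()

# Toy usage: 1 clip, 10 frames, 14-D hand-crafted features per frame.
model = LSTMDGMM()
w, m, v = model(torch.randn(1, 10, 14))
saliency = render_saliency(w[0, -1], m[0, -1], v[0, -1])
print(saliency.shape)  # torch.Size([64, 64])
```

In practice such a model would be trained against fixation-derived ground-truth saliency maps; the rendering step above makes the GMM-to-saliency-map relationship concrete.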