Abstract: Automatic pain detection is an important problem in diagnostic and therapeutic applications. In this paper, we aim to develop a computational framework that automatically detects pain in videos in the wild. Such videos vary with respect to gender, age, ethnicity, and even other qualitative attributes such as upbringing. Previous systems either relied on methodologies confined to a single dataset, which are hard to generalize to the population in the wild, or on invasive methods that collect data using many physiological sensors and induced stressors. We propose a method to automatically detect pain in videos using a state-of-the-art expression recognition system together with deep learning. We curated a dataset of 194 in-the-wild videos labeled as pain or non-pain. We use a sliding-window strategy to obtain fixed-length input samples for an LSTM (Long Short-Term Memory) network, and then carefully concatenate the network outputs of all segments to generate a video-level output. The proposed end-to-end framework predicts a binary classification label (pain/non-pain) at the video level. Our method achieves promising results on the dataset we collected.
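The sliding-window segmentation and segment-to-video aggregation described in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the window length, stride, decision threshold, and the `segment_score` callable (a stand-in for the trained LSTM classifier) are all assumptions.

```python
# Sketch (assumptions, not the paper's code): split a variable-length video
# into fixed-length overlapping windows, score each window with a segment
# classifier (here a placeholder for the LSTM), and combine the per-segment
# scores into one video-level pain/non-pain label.
from typing import Callable, List, Sequence


def sliding_windows(frames: Sequence, length: int, stride: int) -> List[Sequence]:
    """Split a frame sequence into fixed-length windows with the given stride."""
    if len(frames) < length:
        return []  # video too short to form even one window
    return [frames[i:i + length] for i in range(0, len(frames) - length + 1, stride)]


def video_level_label(frames: Sequence,
                      segment_score: Callable[[Sequence], float],
                      length: int = 16,
                      stride: int = 8,
                      threshold: float = 0.5) -> int:
    """Average per-segment pain scores and threshold to a binary video label."""
    windows = sliding_windows(frames, length, stride)
    if not windows:
        return 0  # default to non-pain when no window can be formed
    mean_score = sum(segment_score(w) for w in windows) / len(windows)
    return int(mean_score >= threshold)
```

For example, a 32-frame video with window length 16 and stride 8 yields three overlapping segments; their scores are averaged and thresholded to produce the single binary label.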