Abstract: A facial micro-expression (ME) is a brief, spontaneous facial movement that can reveal a person's genuine emotion. The scarcity of data is a major problem in ME research. Fortunately, generative deep neural network models can help produce the desired samples. In this work, we propose a deep-learning-based adaptive dual motion model (ADMM) for generating facial ME samples. A dual motion extraction (DME) module uses two streams to extract robust motions from two modalities: original color images and edge-based grayscale images. Using edge-based grayscale images helps the method focus on learning subtle movements by eliminating the influence of noise and illumination variations. The motions extracted by the two streams are fed into an adaptive motion fusion (AMF) module, which combines them adaptively to produce the dense motion. Our method was trained on the CASME II, SMIC, and SAMM datasets. Evaluation and analysis of the results demonstrate the effectiveness of our method.
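The abstract does not give implementation details, but the overall wiring it describes (two modality-specific motion streams whose outputs are fused adaptively into a dense motion field) can be sketched in a few lines. The following is a minimal PyTorch sketch for illustration only: the class names, layer sizes, two-channel flow-like motion representation, and the per-pixel weighted-blend fusion rule are all assumptions, not the authors' actual DME/AMF architecture.

```python
import torch
import torch.nn as nn

class DualMotionExtraction(nn.Module):
    """Sketch of a DME-style module: two parallel encoders, one for RGB
    frames and one for edge-based grayscale frames (details assumed)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        # Stream 1: original color images (3 channels) -> 2-channel motion field
        self.color_stream = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, 2, 3, padding=1),
        )
        # Stream 2: edge-based grayscale images (1 channel) -> 2-channel motion field
        self.edge_stream = nn.Sequential(
            nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, 2, 3, padding=1),
        )

    def forward(self, color_img, edge_img):
        return self.color_stream(color_img), self.edge_stream(edge_img)

class AdaptiveMotionFusion(nn.Module):
    """Sketch of an AMF-style module: predicts per-pixel weights and blends
    the two motion fields into a single dense motion field (fusion rule assumed)."""
    def __init__(self):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, motion_color, motion_edge):
        w = self.weight_net(torch.cat([motion_color, motion_edge], dim=1))
        return w * motion_color + (1 - w) * motion_edge  # dense motion

# Toy usage with random 64x64 inputs for the two modalities
color = torch.randn(1, 3, 64, 64)
edge = torch.randn(1, 1, 64, 64)
dme, amf = DualMotionExtraction(), AdaptiveMotionFusion()
dense_motion = amf(*dme(color, edge))
print(dense_motion.shape)  # torch.Size([1, 2, 64, 64])
```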