Abstract: Microexpressions are hard to spot because they arise from fleeting, involuntary movements of facial muscles, which makes their interpretation from video clips a challenging task. In this article, we propose affective-motion imaging, which cumulates the rapid, short-lived variational information of a microexpression into a single response. Moreover, we propose AffectiveNet, an affective-motion feature learning network that perceives subtle changes and learns the most discriminative dynamic features for describing emotion classes. AffectiveNet comprises two blocks: the MICRoFeat block and the MFL block. The MICRoFeat block preserves scale-invariant features, allowing the network to capture both coarse and fine edge variations, whereas the MFL block learns microlevel dynamic variations from two different intermediate convolutional layers. The effectiveness of the proposed network is evaluated over four datasets using two experimental setups: person-independent and cross-dataset validation. The experimental results show that the proposed network outperforms state-of-the-art microexpression recognition (MER) approaches by a significant margin.
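The core idea of affective-motion imaging, cumulating rapid frame-to-frame variation into a single response image, can be illustrated with a minimal sketch. The accumulation rule below (summed absolute inter-frame differences, normalized to [0, 1]) is an assumption for illustration; the paper's actual formulation may differ.

```python
import numpy as np

def affective_motion_image(frames):
    """Collapse a microexpression clip into one motion-response map.

    Hypothetical sketch: cumulate short-lived variation by summing
    absolute inter-frame differences into a single image.

    frames: ndarray of shape (T, H, W), a grayscale clip.
    """
    frames = frames.astype(np.float32)
    diffs = np.abs(np.diff(frames, axis=0))   # (T-1, H, W) frame-to-frame change
    motion = diffs.sum(axis=0)                # cumulate into one response map
    # normalize to [0, 1] so the map can serve as a network input
    rng = motion.max() - motion.min()
    return (motion - motion.min()) / rng if rng > 0 else motion

# tiny synthetic clip: one pixel flickers briefly, mimicking a subtle motion
clip = np.zeros((4, 2, 2))
clip[1, 0, 0] = 1.0
img = affective_motion_image(clip)  # strongest response at the flickering pixel
```

The single resulting image can then be fed to a 2-D CNN such as the proposed AffectiveNet, avoiding per-frame temporal modeling.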