Revisiting Learning-based Video Motion Magnification for Real-time Processing

TMLR Paper7334 Authors

04 Feb 2026 (modified: 06 Feb 2026) · Under review for TMLR · CC BY 4.0
Abstract: Video motion magnification is a technique for capturing and amplifying subtle motion in a video that is invisible to the naked eye. Prior deep learning-based work achieves markedly better quality than conventional signal processing-based approaches. However, it still falls short of real-time performance, which prevents it from being extended to various online systems. In this paper, we revisit the first learning-based model and present experimental analyses, focusing in particular on the identification of redundant components, the insertion of spatial bottlenecks, and the trade-off between channel reduction and layer addition. By integrating the findings of these experiments, we present a real-time, deep learning-based motion magnification model that is between 2.7 and 34.9 times faster than existing learning-based methods, while maintaining generation quality comparable to prior art. To the best of our knowledge, this is the first learning-based motion magnification model that runs in real time on Full-HD videos, even without ad hoc quantization.
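The paper's learned model is not reproduced here; as a rough illustration of what motion magnification does, the sketch below implements the classic *linear* (Eulerian) idea that learning-based methods build on: scale each frame's deviation from a reference frame by a magnification factor and add it back. The function name and the synthetic video are illustrative assumptions, not from the paper.

```python
import numpy as np

def magnify_linear(frames, alpha=10.0):
    """Toy linear (Eulerian) motion magnification.

    Each frame's per-pixel deviation from the first (reference) frame
    is scaled by `alpha` and added back. Real methods, including the
    learned model in this paper, magnify filtered motion
    representations rather than raw intensity differences.
    """
    frames = np.asarray(frames, dtype=np.float64)
    ref = frames[0]
    return np.clip(ref + alpha * (frames - ref), 0.0, 1.0)

# Synthetic 8-frame, 4x4 "video": a subtle 0.01 intensity oscillation
# around mid-gray that would be hard to see with the naked eye.
t = np.arange(8)
subtle = 0.5 + 0.01 * np.sin(2 * np.pi * t / 8)[:, None, None] * np.ones((8, 4, 4))

magnified = magnify_linear(subtle, alpha=10.0)
# At t=2 the oscillation peaks: 0.5 + 10 * (0.51 - 0.50) = 0.6
```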
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Adam_W_Harley1
Submission Number: 7334