Generated Motion Maps

Yuta Matsuzaki, Kazushige Okayasu, Akio Nakamura, Hirokatsu Kataoka

May 27, 2017 (modified: Jun 05, 2017) CVPR 2017 BNMW Submission
  • Paper length: 4 pages
  • Abstract: This paper presents a concept of generated motion maps: directly generating human-specific modalities, such as a human pose heatmap and stacked optical flow, from only one RGB image. Whereas conventional approaches have achieved this complicated estimation with a discriminative model, we find a solution with a recent generative model. The two primary contributions of this paper are as follows: (i) the proposed approach directly generates a {human pose heatmap, stacked optical flow} from an RGB image; (ii) we have collected a database containing image pairs between the RGB channels and the target image modalities (pose-based heatmap and stacked optical flow). The experimental results clearly show the effectiveness of our generative model, as well as its ability to generate motion maps.
  • Keywords: human pose heatmap, stacked optical flow, generative adversarial networks
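The input/output structure described in the abstract can be sketched as below. This is a minimal illustration only: the joint count (`num_joints=16`) and flow-stack length (`num_flow_frames=10`) are assumed values not stated in the abstract, and the fixed random projection merely stands in for the paper's learned generative (GAN-based) mapping.

```python
import numpy as np

def generate_motion_maps(rgb, num_joints=16, num_flow_frames=10):
    """Map a single RGB image (H, W, 3) to a pose heatmap
    (H, W, num_joints) and a stacked optical flow volume
    (H, W, 2 * num_flow_frames), i.e. x/y flow per frame.

    The random linear projection below is a placeholder for a
    trained generator G; only the tensor shapes reflect the paper.
    """
    rng = np.random.default_rng(0)
    h, w, c = rgb.shape
    assert c == 3, "expects an RGB image"
    # Placeholder "generator": project the 3 input channels to the
    # target channel counts with fixed random weights.
    w_pose = rng.standard_normal((3, num_joints))
    w_flow = rng.standard_normal((3, 2 * num_flow_frames))
    heatmap = rgb @ w_pose                       # (H, W, num_joints)
    flow = rgb @ w_flow                          # (H, W, 2 * num_flow_frames)
    # Pose heatmaps are conventionally squashed to [0, 1] per joint.
    heatmap = 1.0 / (1.0 + np.exp(-heatmap))
    return heatmap, flow

rgb = np.zeros((64, 64, 3), dtype=np.float32)
heatmap, flow = generate_motion_maps(rgb)
print(heatmap.shape, flow.shape)  # (64, 64, 16) (64, 64, 20)
```

In the paper's setting the placeholder projection would be replaced by a generator trained adversarially on the collected RGB/modality image pairs.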