View-LSTM: Novel-View Video Synthesis Through View Decomposition

13 Nov 2022 · OpenReview Archive Direct Upload
Abstract: We tackle the problem of synthesizing a video of multiple moving people as seen from a novel view, given only an input video and depth information or human poses of the novel view as prior. This problem requires a model that learns to transform input features into target features while maintaining temporal consistency. To this end, we learn an invariant feature from the input video that is shared across all viewpoints of the same scene and a view-dependent feature obtained using the target priors. The proposed approach, View-LSTM, is a recurrent neural network structure that accounts for the temporal consistency and target feature approximation constraints. We validate View-LSTM by designing an end-to-end generator for novel-view video synthesis. Experiments on a large multi-view action recognition dataset validate the proposed model.
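The core idea above is a recurrent cell whose input at each timestep fuses a view-invariant feature (shared across viewpoints) with a view-dependent feature (derived from the target-view prior). The toy sketch below illustrates that decomposition in a minimal single-unit LSTM step; the fusion by addition, the scalar features, and the weight names are all illustrative assumptions, not the paper's actual architecture.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def view_lstm_step(h, c, f_inv, f_view, W):
    """One step of a toy single-unit LSTM whose input fuses a
    view-invariant feature f_inv with a view-dependent feature f_view.

    h, c   : previous hidden and cell state (floats)
    f_inv  : feature shared across all viewpoints of the scene
    f_view : feature obtained from the target-view prior (depth/pose)
    W      : dict of scalar weights (hypothetical parameterization)
    """
    x = f_inv + f_view  # hypothetical fusion; the paper's scheme may differ
    i = sigmoid(W["xi"] * x + W["hi"] * h)   # input gate
    f = sigmoid(W["xf"] * x + W["hf"] * h)   # forget gate
    o = sigmoid(W["xo"] * x + W["ho"] * h)   # output gate
    g = math.tanh(W["xg"] * x + W["hg"] * h) # candidate cell update
    c_new = f * c + i * g
    h_new = o * math.tanh(c_new)
    return h_new, c_new

# Unroll over a short sequence: the invariant feature stays fixed per
# scene while the view-dependent feature varies with the target prior.
W = {k: 0.5 for k in ("xi", "hi", "xf", "hf", "xo", "ho", "xg", "hg")}
h, c = 0.0, 0.0
for f_view in (0.5, 0.2, 0.8):
    h, c = view_lstm_step(h, c, f_inv=1.0, f_view=f_view, W=W)
```

Keeping the two features as separate inputs is what lets the same recurrence be re-driven with a different target prior to render a different viewpoint of the same scene.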