FUVT: a deep few-shot unsupervised learning-based video-to-video translation scheme using Kalman filtering and relativistic GAN
Abstract: Deep neural networks have provided promising results for the task of video-to-video translation. These schemes require a large number of training samples from both the source and target domains to produce translated video signals of high visual quality. However, acquiring many video signals from the target domain is often difficult, as it demands considerable logistics and time. It is therefore crucial to develop deep few-shot learning-based schemes that can efficiently perform video-to-video translation using only a small number of samples from the target domain. In this paper, we propose a novel deep few-shot unsupervised learning-based video-to-video translation scheme that uses the episodic learning technique to generate high-quality visual signals. Further, to enhance the spatio-temporal consistency of the translated video signals, we incorporate into the proposed method a novel module that employs a Kalman filtering operation and relativistic generative adversarial networks. The results of extensive experiments show that the proposed video-to-video translation scheme significantly outperforms state-of-the-art methods when the number of video signal samples in the target domain is small.
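The abstract does not specify the temporal-consistency module's exact formulation. As a minimal sketch of the underlying idea, the following illustrates how a scalar Kalman filter applied along the time axis can suppress frame-to-frame flicker in a video signal; the function name and noise parameters are illustrative assumptions, not the paper's method.

```python
import numpy as np

def kalman_smooth(frames, process_var=1e-3, obs_var=1e-1):
    """Per-pixel scalar Kalman filtering along the time axis.

    Each pixel's intensity over frames is treated as a noisy observation
    of a slowly varying latent value; filtering blends each new frame with
    the running estimate, reducing temporal flicker.
    """
    x = frames[0].astype(float)   # initial state estimate (first frame)
    p = np.ones_like(x)           # initial estimate variance
    smoothed = [x.copy()]
    for z in frames[1:]:
        # Predict: the state carries over; uncertainty grows by the process noise.
        p = p + process_var
        # Update: Kalman gain weights the new observation against the prediction.
        k = p / (p + obs_var)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        smoothed.append(x.copy())
    return np.stack(smoothed)
```

For example, filtering a sequence of frames corrupted by independent per-frame noise yields later frames whose pixel values lie much closer to the underlying static scene than the raw observations.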
External IDs: dblp:journals/sivp/RoohiEA25