VGPNN: Diverse Generation from a Single Video Made Possible

Weizmann Institute of Science, Rehovot, Israel
* Equal contribution

Generated Video Samples from a Single Video

Original video (top-left); all others are generated

Abstract

Most advanced video generation and manipulation methods train on a large collection of videos. As such, they are restricted to the types of video dynamics they were trained on. To overcome this limitation, GANs trained on a single video were recently proposed. While these provide greater flexibility across a wide variety of video dynamics, they require days to train on a single tiny input video, rendering them impractical.

In this paper we present a fast and practical method for video generation and manipulation from a single natural video, which generates diverse, high-quality video outputs within seconds (for benchmark videos). Our method further scales to full-HD video clips within minutes. Our approach is inspired by a recent advanced patch-nearest-neighbor approach [Granot et al., 2021], which was shown to significantly outperform single-image GANs, both in run-time and in visual quality.

Here we generalize this approach from images to videos, by casting classical space-time patch-based methods as a new generative video model. We adapt the generative image patch-nearest-neighbor approach to efficiently cope with the huge number of space-time patches in a single video. Our method generates more realistic and higher-quality results than single-video GANs (confirmed by quantitative and qualitative evaluations). Moreover, it is disproportionately faster (runtime reduced from several days to seconds). Beyond diverse video generation, we demonstrate several other challenging video applications, including spatio-temporal video retargeting (e.g., video extension and video summarization), video structural analogies, and conditional video inpainting.
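
To make the core idea concrete, below is a minimal, illustrative sketch of a single space-time patch-nearest-neighbor step in Python/NumPy. It is not the authors' implementation: the function names (extract_patches, pnn_step, fold_patches), the brute-force chunked distance search, and the simple averaging of overlapping patches are all assumptions made for clarity. The full method adds further machinery (e.g., a coarse-to-fine spatio-temporal pyramid and a more efficient patch search) to cope with the huge number of patches.

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def extract_patches(video, patch=(3, 7, 7)):
    """Flatten all overlapping space-time patches of a (T, H, W, C) video."""
    windows = sliding_window_view(video, patch, axis=(0, 1, 2))
    # windows has shape (T', H', W', C, kt, kh, kw); flatten each patch to one row
    return windows.reshape(-1, video.shape[3] * int(np.prod(patch)))

def pnn_step(query_video, key_video, value_video, patch=(3, 7, 7), chunk=2048):
    """Replace every patch of query_video with its nearest neighbor among the
    patches of key_video, pasting the corresponding patch of value_video
    (key_video and value_video must have the same dimensions)."""
    q = extract_patches(query_video, patch)
    k = extract_patches(key_video, patch)
    v = extract_patches(value_video, patch)

    k_sq = (k ** 2).sum(axis=1)
    nn_idx = np.empty(len(q), dtype=np.int64)
    for s in range(0, len(q), chunk):                 # chunked brute-force NN search
        qc = q[s:s + chunk]
        dist = (qc ** 2).sum(1, keepdims=True) - 2.0 * qc @ k.T + k_sq
        nn_idx[s:s + chunk] = dist.argmin(axis=1)

    return fold_patches(v[nn_idx], query_video.shape, patch)

def fold_patches(patches, out_shape, patch):
    """Average overlapping patches back into a (T, H, W, C) video ("voting")."""
    T, H, W, C = out_shape
    kt, kh, kw = patch
    out = np.zeros(out_shape)
    count = np.zeros(out_shape)
    patches = patches.reshape(-1, C, kt, kh, kw)
    i = 0
    for t in range(T - kt + 1):
        for y in range(H - kh + 1):
            for x in range(W - kw + 1):
                out[t:t + kt, y:y + kh, x:x + kw] += patches[i].transpose(1, 2, 3, 0)
                count[t:t + kt, y:y + kh, x:x + kw] += 1.0
                i += 1
    return out / count

In a full generation pipeline, the query video would typically be a perturbed or coarser version of the input while keys and values come from the reference video, but the exact scheduling across pyramid levels is beyond this sketch.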

Video Structural Analogies

We can use our method to perform video structural analogies (or "video style transfer") by transferring the motion of a content video into the patch distribution of a style video.
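
As a rough illustration (reusing the hypothetical pnn_step sketch above, not the authors' code): the content video supplies the query patches while the style video supplies both the keys and the values, so every output patch is copied from the style video but arranged according to the content video's structure. In the full method this would run coarse-to-fine rather than in a single step.

import numpy as np

content = np.random.rand(16, 64, 64, 3)   # stand-in content video, shape (T, H, W, C)
style = np.random.rand(16, 64, 64, 3)     # stand-in style video of the same size

# Output patches are copied from the style video, arranged by the content video
analogy = pnn_step(query_video=content, key_video=style, value_video=style)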

Below are results of our video structural analogies using the same style video as above with different content videos (smooth transitions of MNIST digits from one to the next):


Retargeting over Spatial Dimension


Retargeting over Temporal Dimension

Legend: original video (top); extended or summarized (bottom)


Conditional Inpainting

We can add or remove parts of the video by marking the relevant region with a color. This color guides our method to replace the marked region with an object of similar color from elsewhere in the video.
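
A hypothetical sketch of how the colored mark could steer the patch search (again NumPy, not the authors' code, and reusing extract_patches from the earlier sketch): query patches are taken from the video with the marked region painted in the chosen color, while candidate key patches are restricted to regions untouched by the mark, so the nearest-neighbor step naturally pulls in similarly colored content from elsewhere in the video.

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

video = np.random.rand(16, 64, 64, 3)          # stand-in input video (T, H, W, C)
mask = np.zeros((16, 64, 64), dtype=bool)      # True inside the user-marked region
mask[:, 20:40, 20:40] = True
mark_color = np.array([1.0, 0.0, 0.0])         # the user-chosen guiding color

marked = video.copy()
marked[mask] = mark_color                      # paint the region to be replaced

patch = (3, 7, 7)
q = extract_patches(marked, patch)             # queries contain the painted color
k_all = extract_patches(video, patch)
mask_win = sliding_window_view(mask, patch, axis=(0, 1, 2)).reshape(len(k_all), -1)
keep = ~mask_win.any(axis=1)                   # key patches fully outside the mark
k = k_all[keep]
# The chunked nearest-neighbor search and patch folding then proceed as in
# pnn_step above, matching the painted patches to similarly colored, unmarked
# content elsewhere in the video.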

BibTeX

@article{haim2021vgpnn,
  author    = {Haim, Niv and Feinstein, Ben and Granot, Niv and Shocher, Assaf and Bagon, Shai and Dekel, Tali and Irani, Michal},
  title     = {Diverse Generation from a Single Video Made Possible},
  journal   = {arXiv preprint arXiv:2109.08591},
  year      = {2021},
}