
Structure And Content-guided Video Synthesis With Diffusion Models

Patrick Esser, Johnathan Chiu, Parmida Atighehchian, Jonathan Granskog, Anastasis Germanidis. 2023 IEEE/CVF International Conference on Computer Vision (ICCV) – 275 citations

Diffusion Processes ICCV

Text-guided generative diffusion models unlock powerful image creation and editing tools. While these have been extended to video generation, current approaches that edit the content of existing footage while retaining structure require expensive re-training for every input or rely on error-prone propagation of image edits across frames. In this work, we present a structure and content-guided video diffusion model that edits videos based on visual or textual descriptions of the desired output. Conflicts between user-provided content edits and structure representations occur due to insufficient disentanglement between the two aspects. As a solution, we show that training on monocular depth estimates with varying levels of detail provides control over structure and content fidelity. Our model is trained jointly on images and videos, which also exposes explicit control of temporal consistency through a novel guidance method. Our experiments demonstrate a wide variety of successes: fine-grained control over output characteristics, customization based on a few reference images, and a strong user preference towards results by our model.
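The temporal-consistency control mentioned in the abstract can be pictured as a classifier-free-guidance-style blend between a per-frame (image) noise prediction and a spatio-temporal (video) prediction. The sketch below is an illustrative assumption of how such guidance might be applied at each denoising step, not the paper's exact formulation; the names `eps_frame`, `eps_video`, and `w_temporal` are hypothetical.

```python
import torch

def temporally_guided_eps(eps_frame: torch.Tensor,
                          eps_video: torch.Tensor,
                          w_temporal: float) -> torch.Tensor:
    """Hypothetical sketch of temporal-consistency guidance.

    eps_frame:  noise prediction with frames treated independently (image mode)
    eps_video:  noise prediction with temporal layers active (video mode)
    w_temporal: 0 -> per-frame behaviour, 1 -> video-model behaviour,
                >1 -> extrapolate toward stronger temporal consistency
    """
    return eps_frame + w_temporal * (eps_video - eps_frame)

# Toy usage on random tensors shaped (batch, frames, channels, height, width).
eps_f = torch.randn(1, 8, 4, 32, 32)
eps_v = torch.randn(1, 8, 4, 32, 32)
guided = temporally_guided_eps(eps_f, eps_v, w_temporal=1.5)
```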

Similar Work