Spatiotemporal Skip Guidance For Enhanced Video Diffusion Sampling | Awesome LLM Papers


Junha Hyung, Kinam Kim, Susung Hong, Min-Jung Kim, Jaegul Choo. No Venue, 2024

Tags: Has Code, Interdisciplinary Approaches, Model Architecture, Training Techniques

Diffusion models have emerged as a powerful tool for generating high-quality images, videos, and 3D content. While sampling guidance techniques such as classifier-free guidance (CFG) improve quality, they reduce diversity and motion. Autoguidance mitigates these issues but requires training an extra weak model, limiting its practicality for large-scale models. In this work, we introduce Spatiotemporal Skip Guidance (STG), a simple, training-free sampling guidance method for enhancing transformer-based video diffusion models. STG employs an implicit weak model via self-perturbation, avoiding the need for external models or additional training. By selectively skipping spatiotemporal layers, STG produces an aligned, degraded version of the original model that boosts sample quality without compromising diversity or dynamic degree. Our contributions are: (1) introducing STG as an efficient, high-performing guidance technique for video diffusion models; (2) eliminating the need for auxiliary models by simulating a weak model through layer skipping; and (3) ensuring quality-enhanced guidance without compromising sample diversity or dynamics, unlike CFG. For additional results, visit https://junhahyung.github.io/STGuidance.
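The abstract describes guidance against an implicit weak model obtained by skipping layers of the same network. A minimal sketch of that idea, assuming a guidance update of the CFG/autoguidance form (full prediction plus a scaled difference between the full and layer-skipped predictions); the toy model, function names (`toy_video_dit`, `stg_denoise`), skipped-layer indices, and scale are all illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def toy_video_dit(x, weights, skip_layers=()):
    """Stand-in for a transformer-based video diffusion model:
    a stack of residual 'spatiotemporal' blocks. Passing skip_layers
    simulates the aligned, degraded weak model via self-perturbation."""
    for i, w in enumerate(weights):
        if i in skip_layers:
            continue  # skip this spatiotemporal layer (the perturbation)
        x = x + np.tanh(x @ w)  # toy residual block
    return x

def stg_denoise(x_t, weights, scale=1.0, skip_layers=(2,)):
    """Hypothetical STG-style guided prediction: push the full model's
    output away from the layer-skipped (weak) output."""
    eps_full = toy_video_dit(x_t, weights)
    eps_weak = toy_video_dit(x_t, weights, skip_layers=skip_layers)
    return eps_full + scale * (eps_full - eps_weak)
```

Because the weak model is just the original network with layers dropped at inference time, no auxiliary model or extra training is needed; with `scale=0.0` the update reduces to the unguided full-model prediction.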
