
FLOAT: Generative Motion Latent Flow Matching For Audio-driven Talking Portrait

Taekyung Ki, Dongchan Min, Gyoungsu Chae. 2024


With the rapid advancement of diffusion-based generative models, portrait image animation has achieved remarkable results. However, it still faces challenges in temporally consistent video generation and in fast sampling, owing to the iterative nature of diffusion sampling. This paper presents FLOAT, an audio-driven talking portrait video generation method based on a flow matching generative model. We shift the generative modeling from the pixel-based latent space to a learned motion latent space, enabling efficient design of temporally consistent motion. To achieve this, we introduce a transformer-based vector field predictor with a simple yet effective frame-wise conditioning mechanism. Additionally, our method supports speech-driven emotion enhancement, enabling the natural incorporation of expressive motions. Extensive experiments demonstrate that our method outperforms state-of-the-art audio-driven talking portrait methods in visual quality, motion fidelity, and efficiency.
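To make the described approach concrete, here is a minimal sketch of how a transformer-based vector field predictor with frame-wise audio conditioning might be trained under a flow matching objective. All module names, dimensions, and the straight-line (rectified-flow style) interpolation are illustrative assumptions, not the paper's released implementation.

```python
# Hedged sketch: flow matching over a sequence of motion latents,
# conditioned frame-wise on audio features. Shapes and architecture
# choices here are assumptions for illustration only.
import torch
import torch.nn as nn

class VectorFieldPredictor(nn.Module):
    """Transformer that predicts the flow-matching vector field for a
    sequence of motion latents, conditioned per frame on audio features."""
    def __init__(self, motion_dim=128, audio_dim=128, width=512, layers=8, heads=8):
        super().__init__()
        # +1 input channel carries the scalar flow time t per token
        self.in_proj = nn.Linear(motion_dim + audio_dim + 1, width)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=width, nhead=heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.out_proj = nn.Linear(width, motion_dim)

    def forward(self, x_t, audio, t):
        # x_t:   (B, T, motion_dim) noisy motion latents at flow time t
        # audio: (B, T, audio_dim)  per-frame audio features (frame-wise conditioning)
        # t:     (B,)               flow time in [0, 1]
        t_tok = t[:, None, None].expand(-1, x_t.size(1), 1)
        h = self.in_proj(torch.cat([x_t, audio, t_tok], dim=-1))
        return self.out_proj(self.backbone(h))

def flow_matching_loss(model, x1, audio):
    """Regress the velocity of the straight path from noise x0 to data x1
    at a uniformly sampled flow time t (conditional flow matching)."""
    x0 = torch.randn_like(x1)                      # noise endpoint
    t = torch.rand(x1.size(0), device=x1.device)   # per-sample flow time
    tb = t[:, None, None]                          # broadcast over (T, D)
    x_t = (1 - tb) * x0 + tb * x1                  # linear interpolant
    target_v = x1 - x0                             # constant path velocity
    pred_v = model(x_t, audio, t)
    return torch.mean((pred_v - target_v) ** 2)
```

At inference, one would integrate the learned vector field from noise to a motion latent sequence with a few ODE solver steps, which is where the sampling-speed advantage over iterative diffusion comes from.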
