
JEN-1: Text-guided Universal Music Generation With Omnidirectional Diffusion Models

Peike Li, Boyu Chen, Yao Yao, Yikai Wang, Allen Wang, Alex Wang. No Venue, 2023

Compositional Generalization · Efficiency · In-Context Learning · Interactive Environments · Interdisciplinary Approaches · Productivity Enhancement · Training Techniques · Variational Autoencoders

Music generation has attracted growing interest with the advancement of deep generative models. However, generating music conditioned on textual descriptions, known as text-to-music, remains challenging due to the complexity of musical structures and high sampling rate requirements. Despite the task’s significance, prevailing generative models exhibit limitations in music quality, computational efficiency, and generalization. This paper introduces JEN-1, a universal high-fidelity model for text-to-music generation. JEN-1 is a diffusion model incorporating both autoregressive and non-autoregressive training. Through in-context learning, JEN-1 performs various generation tasks including text-guided music generation, music inpainting, and continuation. Evaluations demonstrate JEN-1’s superior performance over state-of-the-art methods in text-music alignment and music quality while maintaining computational efficiency. Our demos are available at http://futureverse.com/research/jen/demos/jen1
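The abstract's central technical claim is that a single diffusion model is trained under both autoregressive and non-autoregressive objectives. The paper's code is not reproduced on this page, so the PyTorch sketch below is only a minimal illustration of that idea under assumptions of my own: a toy transformer denoiser whose attention mask is flipped between causal (autoregressive) and bidirectional (non-autoregressive) per training step, with a standard noise-prediction loss. The `OmniDenoiser` class, the linear corruption schedule, and the 50/50 mode switch are hypothetical stand-ins, not JEN-1's actual architecture or schedule.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OmniDenoiser(nn.Module):
    """Toy latent denoiser: one transformer whose attention mask is switched
    between bidirectional (non-autoregressive) and causal (autoregressive)
    modes during training. A hypothetical stand-in, not JEN-1's model."""
    def __init__(self, dim=64, heads=4, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.out = nn.Linear(dim, dim)

    def forward(self, x, causal: bool):
        T = x.size(1)
        # Upper-triangular -inf mask restricts attention to the past (AR mode);
        # None leaves attention fully bidirectional (NAR mode).
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1) if causal else None
        return self.out(self.encoder(x, mask=mask))

# One toy training step: DDPM-style noise prediction on stand-in audio
# latents, alternating between the two attention modes at random.
model = OmniDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

latents = torch.randn(8, 32, 64)        # (batch, time, latent_dim) stand-in
noise = torch.randn_like(latents)
t = torch.rand(8, 1, 1)                 # toy continuous noise level in [0, 1)
noisy = (1 - t) * latents + t * noise   # assumed linear corruption schedule

causal = bool(torch.rand(()) < 0.5)     # pick AR or NAR mode for this step
opt.zero_grad()
pred = model(noisy, causal=causal)
loss = F.mse_loss(pred, noise)          # predict the injected noise
loss.backward()
opt.step()
```

Under the same framing, the in-context tasks the abstract mentions (inpainting, continuation) would amount to holding observed latent frames fixed and denoising only the masked region; that conditioning logic is omitted from the sketch.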
