
Adding Conditional Control To Text-to-image Diffusion Models

Lvmin Zhang, Anyi Rao, Maneesh Agrawala. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023 – 2580 citations

[Paper]
Tags: Datasets · Diffusion Processes · Fine Tuning · ICCV

We present ControlNet, a neural network architecture that adds spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models and reuses their deep, robust encoding layers, pretrained on billions of images, as a strong backbone for learning a diverse set of conditional controls. The architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero, ensuring that no harmful noise affects the finetuning. We test various conditioning controls (e.g., edges, depth, segmentation, human pose) with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with both small (<50k) and large (>1M) datasets. Extensive results show that ControlNet may facilitate wider applications for controlling image diffusion models.
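To make the "zero convolution" idea concrete, here is a minimal PyTorch sketch (not the authors' code; the `ZeroConv2d` and `ControlledBlock` names and the single-block wiring are hypothetical simplifications). It shows how a zero-initialized 1×1 convolution guards a trainable copy of a frozen pretrained block, so that at the start of finetuning the combined module reproduces the pretrained output exactly and the control signal grows from zero.

```python
import torch
import torch.nn as nn


class ZeroConv2d(nn.Module):
    """1x1 convolution whose weights and bias start at zero, so the branch
    it guards contributes nothing at the beginning of finetuning."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.conv.weight)
        nn.init.zeros_(self.conv.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)


class ControlledBlock(nn.Module):
    """Hypothetical single-block illustration: a frozen pretrained block plus a
    trainable copy whose output is injected through zero convolutions."""

    def __init__(self, pretrained_block: nn.Module, trainable_copy: nn.Module, channels: int):
        super().__init__()
        self.frozen = pretrained_block
        for p in self.frozen.parameters():
            p.requires_grad_(False)          # lock the production-ready weights
        self.copy = trainable_copy           # learns the conditional control
        self.zero_in = ZeroConv2d(channels)  # guards the conditioning input
        self.zero_out = ZeroConv2d(channels) # guards the injected control signal

    def forward(self, x: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
        # At initialization both zero convolutions output zeros, so this
        # reduces to self.frozen(x): no harmful noise perturbs the backbone.
        control = self.copy(x + self.zero_in(condition))
        return self.frozen(x) + self.zero_out(control)
```

Because both guarding convolutions start at zero, gradients still flow into the trainable copy while the frozen backbone's behavior is preserved until the control pathway has learned something useful.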

Similar Work