
Faces à la Carte: Text-to-Face Generation via Attribute Disentanglement

Tianren Wang, Teng Zhang, Brian Lovell. IEEE Winter Conference on Applications of Computer Vision (WACV) 2021 – 42 citations

Tags: 3D Representation, Applications, Fine Tuning, Image Text Integration, Interactive Environments, Multimodal, Semantic Representation, Visual Contextualization

Text-to-Face (TTF) synthesis is a challenging task with great potential for diverse computer vision applications. Compared to Text-to-Image (TTI) synthesis, the textual description of a face can be far more complicated and detailed, owing to the variety of facial attributes and the need to parse high-dimensional, abstract natural language. In this paper, we propose a Text-to-Face model that not only produces high-resolution (1024x1024) images with text-to-image consistency, but also outputs multiple diverse faces that naturally cover the wide range of facial features left unspecified by the text. By fine-tuning a multi-label classifier and an image encoder, our model obtains the attribute vectors and image embeddings used to transform an input noise vector sampled from the normal distribution. The transformed noise vector is then fed into a pre-trained high-resolution image generator to produce a set of faces with the desired facial attributes. We refer to our model as TTF-HD. Experimental results show that TTF-HD generates high-quality faces with state-of-the-art performance.
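The pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the linear form of the conditioning, and all names (`transform_noise`, `W`, `attr_vec`) are assumptions made for clarity, standing in for the learned attribute-direction transform and the pre-trained generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def transform_noise(z, attr_vec, W):
    # Hypothetical linear conditioning: shift the noise vector along
    # attribute directions so the generator output matches the text.
    return z + W @ attr_vec

# Illustrative sizes, not taken from the paper.
latent_dim, n_attrs = 512, 40

z = rng.standard_normal(latent_dim)                    # noise sampled from N(0, 1)
attr_vec = rng.integers(0, 2, n_attrs).astype(float)   # multi-label attribute vector
W = rng.standard_normal((latent_dim, n_attrs)) * 0.1   # stand-in for learned directions

z_prime = transform_noise(z, attr_vec, W)
# z_prime would then be fed to a pre-trained high-resolution generator
# (e.g. a StyleGAN-style 1024x1024 model) to synthesize the face.
print(z_prime.shape)  # (512,)
```

Sampling several `z` values and transforming each one is what yields the set of diverse faces sharing the specified attributes.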

Similar Work