
Make-It-3D: High-Fidelity 3D Creation from a Single Image with Diffusion Prior

Junshu Tang, Tengfei Wang, Bo Zhang, Ting Zhang, Ran Yi, Lizhuang Ma, Dong Chen. IEEE/CVF International Conference on Computer Vision (ICCV), 2023 – 149 citations

[Paper]
Tags: Applications, Diffusion Processes, ICCV

In this work, we investigate the problem of creating high-fidelity 3D content from only a single image. This is inherently challenging: it essentially involves estimating the underlying 3D geometry while simultaneously hallucinating unseen textures. To address this challenge, we leverage prior knowledge from a well-trained 2D diffusion model to act as 3D-aware supervision for 3D creation. Our approach, Make-It-3D, employs a two-stage optimization pipeline: the first stage optimizes a neural radiance field by incorporating constraints from the reference image at the frontal view and diffusion prior at novel views; the second stage transforms the coarse model into textured point clouds and further elevates the realism with diffusion prior while leveraging the high-quality textures from the reference image. Extensive experiments demonstrate that our method outperforms prior works by a large margin, resulting in faithful reconstructions and impressive visual quality. Our method presents the first attempt to achieve high-quality 3D creation from a single image for general objects and enables various applications such as text-to-3D creation and texture editing.
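
To make the two-stage idea concrete, below is a minimal PyTorch sketch of the stage-1 optimization loop described in the abstract: a radiance field is fit to the reference image at the frontal view while a 2D diffusion prior supervises renderings at sampled novel views. The `TinyNeRF`, `render`, and `score_distillation_loss` components, as well as the loss weighting, are hypothetical stand-ins for illustration only, not the authors' implementation.

```python
# Sketch of Make-It-3D's stage-1 loop under placeholder components (not the official code).

import torch
import torch.nn as nn


class TinyNeRF(nn.Module):
    """Placeholder radiance field: maps a camera-pose embedding to an RGB image."""

    def __init__(self, image_size: int = 64):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(3, 256), nn.ReLU(),
            nn.Linear(256, 3 * image_size * image_size),
        )

    def render(self, pose: torch.Tensor) -> torch.Tensor:
        # A real NeRF would volume-render along camera rays; this stub just decodes the pose.
        out = self.net(pose).view(3, self.image_size, self.image_size)
        return torch.sigmoid(out)


def score_distillation_loss(image: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for the diffusion-prior (SDS-style) loss on a novel view.

    In practice this would noise the rendering, query a frozen 2D diffusion model,
    and back-propagate the predicted-noise residual into the radiance field.
    """
    return image.var()  # dummy differentiable term so the sketch runs end to end


def stage_one(reference: torch.Tensor, frontal_pose: torch.Tensor, steps: int = 100) -> TinyNeRF:
    nerf = TinyNeRF(image_size=reference.shape[-1])
    opt = torch.optim.Adam(nerf.parameters(), lr=1e-3)
    for _ in range(steps):
        # Reference-view constraint: the frontal rendering should match the input image.
        ref_loss = nn.functional.mse_loss(nerf.render(frontal_pose), reference)
        # Diffusion prior applied at a randomly sampled novel view.
        novel_pose = torch.randn(3)
        prior_loss = score_distillation_loss(nerf.render(novel_pose))
        loss = ref_loss + 0.1 * prior_loss  # weighting is an assumption, not from the paper
        opt.zero_grad()
        loss.backward()
        opt.step()
    return nerf


if __name__ == "__main__":
    reference_image = torch.rand(3, 64, 64)  # the single input image
    frontal_pose = torch.zeros(3)            # camera pose of the reference view
    stage_one(reference_image, frontal_pose, steps=10)
```

Stage 2, which converts the coarse model into textured point clouds and refines realism with the same diffusion prior, would replace the radiance-field renderer above with a point-cloud renderer while reusing the reference image's high-quality textures.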

Similar Work