ShareGPT-4o-Image: Aligning Multimodal Models with GPT-4o-Level Image Generation

Junying Chen, Zhenyang Cai, Pengcheng Chen, Shunian Chen, Ke Ji, Xidong Wang, Yunjin Yang, Benyou Wang. No Venue, 2025

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Datasets, Image Text Integration, Interactive Environments, Interdisciplinary Approaches, Model Architecture, Training Techniques, Variational Autoencoders, Visual Contextualization

Recent advances in multimodal generative models have unlocked photorealistic, instruction-aligned image generation, yet leading systems such as GPT-4o-Image remain proprietary and inaccessible. To democratize these capabilities, we present ShareGPT-4o-Image, the first dataset comprising 45K text-to-image and 46K text-and-image-to-image samples, all synthesized with GPT-4o's image generation capabilities in order to distill its advanced image generation abilities. Leveraging this dataset, we develop Janus-4o, a multimodal large language model capable of both text-to-image and text-and-image-to-image generation. Janus-4o not only significantly improves text-to-image generation over its predecessor, Janus-Pro, but also newly supports text-and-image-to-image generation. Notably, it achieves impressive text-and-image-to-image generation performance from scratch, using only 91K synthetic samples and 6 hours of training on an 8×A800-GPU machine. We hope the release of ShareGPT-4o-Image and Janus-4o will foster open research in photorealistic, instruction-aligned image generation.
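
The abstract describes two record types in the dataset: 45K text-to-image samples and 46K text-and-image-to-image (editing) samples, totaling the 91K used to fine-tune Janus-4o. The sketch below shows one way such records might be split by task type before training; the file name and field names ("prompt", "source_image", "target_image") are assumptions for illustration, not the released schema.

```python
# Hypothetical sketch: partition ShareGPT-4o-Image-style records into the two
# task types described in the abstract. Field and file names are assumed.
import json
from pathlib import Path


def load_records(path: str):
    """Read one JSON object per line and split records by task type."""
    text_to_image, text_and_image_to_image = [], []
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        # Records carrying a source image correspond to text-and-image-to-image
        # (editing); records without one are plain text-to-image generation.
        if record.get("source_image"):
            text_and_image_to_image.append(record)
        else:
            text_to_image.append(record)
    return text_to_image, text_and_image_to_image


if __name__ == "__main__":
    t2i, ti2i = load_records("sharegpt_4o_image.jsonl")  # hypothetical file name
    print(f"{len(t2i)} text-to-image and {len(ti2i)} text-and-image-to-image samples")
```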

Similar Work