Chain-of-Zoom: Extreme Super-Resolution via Scale Autoregression and Preference Alignment

Bryan Sangwoo Kim, Jeongsol Kim, Jong Chul Ye. No Venue, 2025

Tags: Efficiency, Has Code, Prompting, Reinforcement Learning, Tools, Training Techniques

Modern single-image super-resolution (SISR) models deliver photo-realistic results at the scale factors on which they are trained, but collapse when asked to magnify far beyond that regime. We address this scalability bottleneck with Chain-of-Zoom (CoZ), a model-agnostic framework that factorizes SISR into an autoregressive chain of intermediate scale-states with multi-scale-aware prompts. CoZ repeatedly re-uses a backbone SR model, decomposing the conditional probability into tractable sub-problems to achieve extreme resolutions without additional training. Because visual cues diminish at high magnifications, we augment each zoom step with multi-scale-aware text prompts generated by a vision-language model (VLM). The prompt extractor itself is fine-tuned using Group Relative Policy Optimization (GRPO) with a critic VLM, aligning text guidance towards human preference. Experiments show that a standard 4x diffusion SR model wrapped in CoZ attains beyond 256x enlargement with high perceptual quality and fidelity. Project Page: https://bryanswkim.github.io/chain-of-zoom/ .
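The autoregressive factorization described in the abstract can be sketched as a simple loop: a fixed 4x backbone is applied repeatedly, with a fresh VLM-generated prompt at each intermediate scale-state, until the target magnification is reached. The sketch below uses hypothetical stand-ins (`sr_step`, `prompt_fn`, and size-tuple "images") for the backbone SR model and the prompt extractor; it illustrates the recursion only, not the authors' actual API.

```python
def chain_of_zoom(image, target_scale, sr_step, prompt_fn, step_scale=4):
    """Apply the backbone SR model autoregressively until the
    accumulated magnification reaches target_scale.

    Hypothetical interface: sr_step(image, prompt) performs one
    step_scale-x zoom; prompt_fn(image) extracts a multi-scale-aware
    text prompt from the current intermediate scale-state.
    """
    scale = 1
    while scale < target_scale:
        prompt = prompt_fn(image)       # fresh text guidance per zoom step
        image = sr_step(image, prompt)  # one 4x super-resolution step
        scale *= step_scale
    return image, scale

# Toy stand-ins: an "image" is represented only by its (h, w) size.
def toy_sr(img, prompt):
    h, w = img
    return (h * 4, w * 4)

def toy_prompt(img):
    return f"describe content at size {img}"

out, scale = chain_of_zoom((64, 64), 256, toy_sr, toy_prompt)
# Four 4x steps accumulate to 256x: out == (16384, 16384), scale == 256
```

Because each step conditions only on the previous scale-state and its prompt, the same pretrained 4x model covers any power-of-four magnification without retraining, which is the core of the claimed 256x-and-beyond result.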

Discussion: https://huggingface.co/discussions/paper/6837fe7864391bba7e47779b