3D world generation is essential for applications such as immersive content creation and autonomous driving simulation. Recent advances in 3D world generation have shown promising results; however, existing methods are constrained to grid layouts and suffer from inconsistent object scale across the world. In this work, we introduce Map2World, a novel framework that, for the first time, enables 3D world generation conditioned on user-defined segment maps of arbitrary shapes and scales, ensuring global scale consistency and flexibility across expansive environments. To further enhance quality, we propose a detail enhancer network that generates fine details of the world; by incorporating global structure information, it adds fine-grained detail without compromising overall scene coherence. We design the entire pipeline to leverage strong priors from asset generators, achieving robust generalization across diverse domains even with limited training data for scene generation. Extensive experiments demonstrate that our method significantly outperforms existing approaches in user controllability, scale consistency, and content coherence, enabling users to generate 3D worlds under more complex conditions.
Map2World generates a 3D world conditioned on user-defined segment maps of arbitrary shapes and scales, ensuring global scale consistency and flexibility across expansive environments.
Map2World controls the scene-level scale by optimizing the initial noisy latent.
Map2World trains a detail enhancer network that refines the quality of the generated world.
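The scale control above optimizes the initial noisy latent against a scale objective. As a rough illustration only (the function names `decode_scale` and `optimize_initial_latent`, the loss, and the analytic gradient are all hypothetical stand-ins, not the paper's actual generator or training code), a toy version of optimizing a latent so its decoded scale matches a user target might look like:

```python
import math

def decode_scale(z):
    # Stand-in for a generator's scene-scale readout; here simply the
    # latent's Euclidean norm (an assumption for illustration).
    return math.sqrt(sum(x * x for x in z))

def optimize_initial_latent(z0, target_scale, lr=0.1, steps=200):
    """Gradient-descend the initial noisy latent z0 so the decoded scene
    scale matches target_scale, minimizing (scale(z) - target)^2."""
    z = list(z0)
    for _ in range(steps):
        s = decode_scale(z)
        # d/dz (s - t)^2 = 2 (s - t) * z / s  for s = ||z||
        coeff = 2.0 * lr * (s - target_scale) / (s + 1e-8)
        z = [x - coeff * x for x in z]
    return z

# Example: rescale a latent of norm 3 toward a target scale of 6.
z = optimize_initial_latent([1.0, 2.0, 2.0], target_scale=6.0)
```

In the actual method, `decode_scale` would be replaced by a differentiable path through the generator, with gradients obtained by backpropagation rather than this closed-form toy.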
@article{chung2026map,
author = {Chung, Jaeyoung and Lee, Suyoung and Xiang, Jianfeng and Yang, Jiaolong and Lee, Kyoung Mu},
title = {Map2World: Segment Map Conditioned Text to 3D World Generation},
journal = {arXiv},
year = {2025},
}