3D-RE-GEN: 3D Reconstruction of Indoor Scenes with a Generative Framework
🌐 Project Page 📄 Paper (arXiv)
We propose a single-image 3D scene reconstruction method that produces complete, editable scenes from a single photograph. Our method reconstructs individual objects and the surrounding background as textured 3D assets, enabling coherent scene assembly from minimal input. We combine instance segmentation, context-aware generative inpainting, 2D-to-3D asset creation, and constrained optimization to recover physically plausible geometry, materials, and lighting. The resulting scenes preserve correct spatial relationships, lighting consistency, and material fidelity, making them suitable for production-ready workflows.
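At a high level, the method chains these four stages. The sketch below is a minimal illustration of that flow; all function and class names (`segment_instances`, `inpaint_occlusions`, `lift_to_3d`, `optimize_placement`, `Scene`) are hypothetical placeholders, not this repository's actual API.

```python
# Minimal sketch of the reconstruction flow described above.
# All names here are hypothetical placeholders, not the repository's API.
from dataclasses import dataclass, field

@dataclass
class Scene:
    assets: list = field(default_factory=list)  # one textured 3D asset per object
    background: object = None                   # reconstructed background geometry

def segment_instances(image):
    """Stage 1: instance segmentation (e.g., SAM / Grounded-SAM)."""
    raise NotImplementedError

def inpaint_occlusions(image, masks):
    """Stage 2: context-aware generative inpainting of occluded regions,
    yielding completed object views and a clean background plate."""
    raise NotImplementedError

def lift_to_3d(object_view):
    """Stage 3: 2D-to-3D asset creation (e.g., Hunyuan3D-2.0)."""
    raise NotImplementedError

def optimize_placement(scene, image):
    """Stage 4: constrained optimization of poses, materials, and
    lighting for a physically plausible, coherent scene."""
    raise NotImplementedError

def reconstruct(image):
    masks = segment_instances(image)                      # isolate objects
    views, background = inpaint_occlusions(image, masks)  # complete occlusions
    scene = Scene(assets=[lift_to_3d(v) for v in views],  # lift each object
                  background=background)
    return optimize_placement(scene, image)               # assemble coherently
```

The stubs map one-to-one onto the stages named in the abstract; in the actual code these steps are presumably distributed across the numbered pipeline stages invoked in the quick start below.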
See INSTALLATION.md for setup instructions.
Quick start:
```bash
git clone --recursive https://github.com/cgtuebingen/3D-RE-GEN.git
cd 3D-RE-GEN
cd segmentor && wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth && cd ..
mamba run -p ./venv_py310 python run.py -p 1 2 3 4 5 6 7 8 9
```
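The trailing `-p 1 2 3 4 5 6 7 8 9` suggests that `run.py` executes a numbered sequence of pipeline stages and that a subset can be selected. Below is a minimal sketch of such stage selection, assuming an argparse-style interface; this is hypothetical, and the actual `run.py` may differ.

```python
# Hypothetical sketch of how run.py's -p flag could select pipeline
# stages; the actual run.py may parse its arguments differently.
import argparse

def main():
    parser = argparse.ArgumentParser(description="Run reconstruction pipeline stages.")
    parser.add_argument("-p", type=int, nargs="+", choices=range(1, 10),
                        default=list(range(1, 10)), metavar="STAGE",
                        help="pipeline stage numbers to execute, in order")
    args = parser.parse_args()
    for stage in args.p:
        # Dispatch to the corresponding pipeline step here.
        print(f"Running stage {stage}")

if __name__ == "__main__":
    main()
```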
The source code provided in this repository is released under the MIT License.

**Important Note on Third-Party Assets:** This project integrates several third-party models and libraries, including VGGT, Segment Anything Model (SAM), Hunyuan3D-2.0, and Grounded-SAM, which are governed by their own separate licenses.
While our code is open source, the weights and underlying code for these external models may come with stricter restrictions (e.g., non-commercial use, research-only, or attribution requirements). Users are responsible for reviewing and adhering to the specific licensing terms of each component before use.
Our paper is available on arXiv. If you find this work useful, please consider citing:
```bibtex
@inproceedings{sautter20253dregen,
  author    = {Sautter, Tobias and Dihlmann, Jan-Niklas and Lensch, Hendrik P.A.},
  title     = {3D-RE-GEN: 3D Reconstruction of Indoor Scenes with a Generative Framework},
  booktitle = {arXiv preprint},
  year      = {2025}
}
```