We are excited to present LiteReality ✨, an automatic pipeline that converts RGB-D scans of indoor environments into graphics-ready 🏠 scenes. In these scenes, all objects are represented as high-quality meshes with PBR materials 🎨 that match their real-world appearance. The scenes also include articulated objects 🔧 and are ready to integrate into graphics pipelines for rendering 💡 and physics-based interactions 🕹️.
With high-quality meshes and PBR materials 🎨, the reconstructed scenes integrate seamlessly into rendering pipelines 🖥️. Below are examples of rendered scenes, reconstructed from either real-life scans (📷 left) or public datasets (📚 right).
🎬 Watch our explainer video now and understand everything in just 4 minutes! ⏱️💡✨
🏠 Left: Rendered 3D scene reconstruction
📸 Right: Original RGB capture
Swipe through the carousel above to see more examples. 🔄
Given an input RGB-D scan, the process begins with scene perception and parsing, where the room layout and 3D object bounding boxes are detected and parsed into a structured, physically plausible arrangement with the help of a scene graph. In the object reconstruction stage, identical clustering first groups repeated objects, and a hierarchical retrieval approach then matches each cluster to a 3D model from the LiteReality database. The material painting stage retrieves and optimizes PBR materials by referencing the observed images. Finally, procedural reconstruction assembles all of this information into a graphics-ready environment that combines realism with interactivity.
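To make the object reconstruction stage concrete, here is a minimal sketch of identical clustering: detected objects whose appearance embeddings are close are grouped so that one database retrieval can be shared across the whole cluster. All names (`DetectedObject`, `cluster_identical`, the toy 2-D features, the `tol` threshold) are illustrative stand-ins, not the authors' actual API.

```python
# Hedged sketch of identical-object clustering (illustrative names, not the
# LiteReality codebase). Objects with the same label and nearby embeddings
# are grouped, so each cluster triggers a single database retrieval.
from dataclasses import dataclass


@dataclass
class DetectedObject:
    label: str       # semantic category from scene parsing
    feature: tuple   # appearance/geometry embedding (toy 2-D here)


def cluster_identical(objects, tol=0.1):
    """Greedy clustering: an object joins the first cluster whose
    representative shares its label and is within `tol` per dimension."""
    clusters = []
    for obj in objects:
        for cluster in clusters:
            rep = cluster[0]
            if rep.label == obj.label and all(
                abs(a - b) <= tol for a, b in zip(rep.feature, obj.feature)
            ):
                cluster.append(obj)
                break
        else:
            clusters.append([obj])
    return clusters


# Two near-identical chairs collapse into one cluster; the table stays alone.
scene = [
    DetectedObject("chair", (0.00, 0.00)),
    DetectedObject("chair", (0.05, 0.02)),
    DetectedObject("table", (1.00, 1.00)),
]
clusters = cluster_identical(scene)
```

In a real pipeline the features would come from a learned encoder and the match would feed a hierarchical (category-then-instance) retrieval; the greedy threshold here is just the simplest scheme that exhibits the sharing behavior.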
One of the key challenges in the pipeline—one that prior work often struggles with—is reliably recovering PBR materials at scale. While single-image methods perform well on clear views, they frequently fail on room-level scans under occlusion, poor lighting, and geometric misalignment between retrieved objects and the input images. To address these limitations, we introduce an MLLM-based retrieval-and-optimization framework for robust, scalable material recovery.
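The retrieve-then-optimize idea can be sketched in a few lines: pick the closest material from a library, then fit a small number of parameters to the observation. The material library, the color-distance scoring, and the single scalar "tint" parameter below are toy stand-ins; the paper's framework uses an MLLM for retrieval and optimizes full PBR parameters against the observed images.

```python
# Hedged sketch of retrieve-then-optimize material recovery. Library entries,
# names, and the one-parameter fit are illustrative, not the paper's method.
MATERIAL_LIBRARY = {
    "oak_wood":   (0.55, 0.35, 0.20),  # base-color albedo (linear RGB)
    "red_fabric": (0.70, 0.10, 0.10),
    "steel":      (0.60, 0.60, 0.65),
}


def retrieve_material(observed_rgb):
    """Retrieval step: pick the library material whose base color is
    closest (squared error) to the observed color."""
    return min(
        MATERIAL_LIBRARY,
        key=lambda name: sum(
            (a - b) ** 2 for a, b in zip(MATERIAL_LIBRARY[name], observed_rgb)
        ),
    )


def optimize_tint(base_rgb, observed_rgb):
    """Optimization step: closed-form least-squares scalar s minimizing
    ||s * base - observed||^2, refining the retrieved material."""
    num = sum(b * o for b, o in zip(base_rgb, observed_rgb))
    den = sum(b * b for b in base_rgb)
    return num / den


# A slightly darkened wood observation retrieves oak and fits a tint < 1.
observed = (0.50, 0.32, 0.18)
name = retrieve_material(observed)
tint = optimize_tint(MATERIAL_LIBRARY[name], observed)
```

The point of the two-step split is robustness: retrieval anchors the solution to a plausible material even when the observation is occluded or badly lit, and the optimization only has to explain the residual appearance difference.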
👇 Examples:
Creating interactive, graphics-ready scenes unlocks a wide range of application scenarios. Here, we showcase a few examples, including flexible relighting, physics-based interactions, and object-level manipulation. These capabilities open the door to even more applications, such as VR/AR experiences, robotics simulation, interior design, and digital twin systems.
At present, LiteReality targets typical indoor environments without extreme design variations. To generalize to more diverse scenes, several key limitations need to be addressed:
@misc{huang2025literealitygraphicsready3dscene,
  title={LiteReality: Graphics-Ready 3D Scene Reconstruction from RGB-D Scans},
  author={Zhening Huang and Xiaoyang Wu and Fangcheng Zhong and Hengshuang Zhao and Matthias Nießner and Joan Lasenby},
  year={2025},
  eprint={2507.02861},
  url={https://arxiv.org/abs/2507.02861}
}