# *EmbodiedGen*: Towards a Generative 3D World Engine for Embodied Intelligence
[🌐 Project Page](https://horizonrobotics.github.io/robot_lab/embodied_gen/index.html)
[📄 Paper](https://arxiv.org/abs/2506.10600)
[🎥 Video](https://www.youtube.com/watch?v=rG4odybuJRk)
[🤗 Image-to-3D Demo](https://huggingface.co/spaces/HorizonRobotics/EmbodiedGen-Image-to-3D)
[🤗 Text-to-3D Demo](https://huggingface.co/spaces/HorizonRobotics/EmbodiedGen-Text-to-3D)
[🤗 Texture-Gen Demo](https://huggingface.co/spaces/HorizonRobotics/EmbodiedGen-Texture-Gen)
> ***EmbodiedGen*** is a generative engine for creating diverse, interactive 3D worlds composed of high-quality 3D assets (mesh & 3DGS) with plausible physics, leveraging generative AI to address the generalization challenges of embodied-intelligence research.
> It is composed of six key modules: `Image-to-3D`, `Text-to-3D`, `Texture Generation`, `Articulated Object Generation`, `Scene Generation`, and `Layout Generation`.
---
## ✨ Table of Contents
- [🖼️ Image-to-3D](#image-to-3d)
- [📝 Text-to-3D](#text-to-3d)
- [🎨 Texture Generation](#texture-generation)
- [🏞️ 3D Scene Generation](#3d-scene-generation)
- [⚙️ Articulated Object Generation](#articulated-object-generation)
- [🏗️ Layout (Interactive 3D Worlds) Generation](#layout-generation)
## 🚀 Quick Start
### ✅ Setup Environment
```sh
git clone https://github.com/HorizonRobotics/EmbodiedGen.git
cd EmbodiedGen
git checkout v0.1.0
git submodule update --init --recursive --progress
conda create -n embodiedgen python=3.10.13 -y
conda activate embodiedgen
bash install.sh
```
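After `install.sh` finishes, a quick sanity check can confirm the core dependencies resolved. The package names below are assumptions about what the installer provides, not an exhaustive manifest; adjust them to your setup.

```python
import importlib.util

# Package names are assumptions about what install.sh provides;
# edit the list to match your environment.
required = ["torch", "torchvision", "numpy", "gradio"]
missing = [name for name in required if importlib.util.find_spec(name) is None]
print("All core packages found." if not missing else f"Missing packages: {missing}")
```

Using `find_spec` avoids importing heavyweight packages just to check their presence.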
### ✅ Setup GPT Agent
Update the API key in `embodied_gen/utils/gpt_config.yaml`.
You can choose between two backends for the GPT agent:
- **`gpt-4o`** (recommended): use this if you have access to **Azure OpenAI**.
- **`qwen2.5-vl`**: a free alternative via OpenRouter. Apply for a free key [here](https://openrouter.ai/settings/keys) (50 free requests per day) and set `api_key` in `embodied_gen/utils/gpt_config.yaml`.
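As a rough illustration, the config pairs a backend choice with its credentials. The field names below are assumptions, not the shipped schema; keep the keys already present in `embodied_gen/utils/gpt_config.yaml`.

```yaml
# Hypothetical sketch of embodied_gen/utils/gpt_config.yaml -- field names
# are assumptions; follow the keys the shipped file actually defines.
agent_type: gpt-4o          # or qwen2.5-vl
api_key: YOUR_API_KEY       # Azure OpenAI key, or free OpenRouter key
base_url: https://your-endpoint.example.com/v1
```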
---
## 🖼️ Image-to-3D
### ☁️ Service
Run the image-to-3D generation service locally.
Models are downloaded automatically on the first run; please be patient.
```sh
# Run in foreground
python apps/image_to_3d.py
# Or run in the background
CUDA_VISIBLE_DEVICES=0 nohup python apps/image_to_3d.py > /dev/null 2>&1 &
```
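When running in the background, keeping a log file instead of discarding output makes the first-run model download visible. The log and PID file names below are arbitrary choices, not project conventions.

```sh
# Variant of the background launch above that keeps a log and records the PID.
CUDA_VISIBLE_DEVICES=0 nohup python apps/image_to_3d.py > imageto3d.log 2>&1 &
echo $! > imageto3d.pid
# Follow the model-download progress:  tail -f imageto3d.log
# Stop the service later with:        kill "$(cat imageto3d.pid)"
```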
### ⚡ API
Generate physically plausible 3D assets from image input via the command-line API.
```sh
python3 embodied_gen/scripts/imageto3d.py \
--image_path apps/assets/example_image/sample_04.jpg apps/assets/example_image/sample_19.jpg \
--output_root outputs/imageto3d
# See result(.urdf/mesh.obj/mesh.glb/gs.ply) in ${output_root}/sample_xx/result
```
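The result folders can be sanity-checked programmatically after a batch run. The expected suffixes mirror the comment in the block above (`.urdf`, `mesh.obj`, `mesh.glb`, `gs.ply`); the exact file names, the URDF's in particular, are assumptions.

```python
from pathlib import Path

# Suffixes mirror the comment above; exact file names are assumptions.
EXPECTED_SUFFIXES = (".urdf", ".obj", ".glb", ".ply")

def missing_artifacts(result_dir: str) -> list[str]:
    """Return the expected suffixes with no matching file in result_dir."""
    present = {p.suffix for p in Path(result_dir).glob("*")}
    return [s for s in EXPECTED_SUFFIXES if s not in present]

def check_outputs(output_root: str) -> dict[str, list[str]]:
    """Map each sample's result dir to its list of missing artifact suffixes."""
    return {str(d): missing_artifacts(d)
            for d in Path(output_root).glob("sample_*/result")}
```

An empty list for a sample means all expected artifact types were produced.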
---
## 📝 Text-to-3D
### ☁️ Service
Deploy the text-to-3D generation service locally.
Text-to-image generation is based on the Kolors model and supports both Chinese and English prompts.
Models are downloaded automatically on the first run (see `download_kolors_weights`); please be patient.
```sh
python apps/text_to_3d.py
```
### ⚡ API
Generate 3D assets from text prompts via the command-line API (text-to-image is based on the Kolors model).
```sh
bash embodied_gen/scripts/textto3d.sh \
--prompts "small bronze figurine of a lion" "A globe with wooden base and latitude and longitude lines" "Orange electric hand drill with polished details" \
--output_root outputs/textto3d
```
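For larger batches, the same invocation can be composed programmatically. This sketch simply rebuilds the shell command shown above, assuming only the `--prompts`/`--output_root` flags from the example.

```python
import shlex

def build_textto3d_cmd(prompts: list[str], output_root: str) -> str:
    """Compose a textto3d.sh call, quoting each prompt safely for the shell."""
    argv = ["bash", "embodied_gen/scripts/textto3d.sh",
            "--prompts", *prompts, "--output_root", output_root]
    return shlex.join(argv)

cmd = build_textto3d_cmd(
    ["small bronze figurine of a lion", "a globe with a wooden base"],
    "outputs/textto3d",
)
print(cmd)
```

`shlex.join` quotes prompts containing spaces, so generated commands stay safe to paste into a shell.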
---
## 🎨 Texture Generation
### ☁️ Service
Run the texture generation service locally.
Models are downloaded automatically on the first run (see `download_kolors_weights`, `geo_cond_mv`); please be patient.
```sh
python apps/texture_edit.py
```
### ⚡ API
```sh
bash embodied_gen/scripts/texture_gen.sh \
--mesh_path "apps/assets/example_texture/meshes/robot_text.obj" \
--prompt "举着牌子的写实风格机器人，大眼睛，牌子上写着“Hello”的文字" \
--output_root "outputs/texture_gen/" \
--uuid "robot_text"
```
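In the example above, `--uuid` matches the mesh filename stem. Assuming that convention holds (an assumption, not a documented rule), arguments for a batch of meshes can be derived like this:

```python
from pathlib import Path

def texture_job_args(mesh_path: str, prompt: str,
                     output_root: str = "outputs/texture_gen/") -> dict[str, str]:
    """Derive texture_gen.sh arguments; uuid = mesh filename stem (assumed convention)."""
    return {
        "mesh_path": mesh_path,
        "prompt": prompt,
        "output_root": output_root,
        "uuid": Path(mesh_path).stem,
    }

args = texture_job_args(
    "apps/assets/example_texture/meshes/robot_text.obj",
    "realistic-style robot holding a sign that reads 'Hello'",
)
print(args["uuid"])
```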