This repository contains the render pipeline tools used for the generation of the BEDLAM2.0 synthetic video dataset (NeurIPS 2025, Datasets and Benchmarks track).
It includes automation scripts for SMPL-X data preparation in Blender, Unreal Engine 5.3.2 data import and rendering, and data post-processing.
Related repositories:
- Machine Learning
- BEDLAM2 Retargeting
- Rendering: BEDLAM CVPR2023 render pipeline tools for Unreal 5.0
- Create animated SMPL-X bodies (locked head, no head bun, neutral model, UV map 2023) from SMPL-X animation data files and export them in Alembic ABC format. SMPL-X pose correctives are baked into the Alembic geometry cache and are used in Unreal without any additional software requirements.
- Details: blender/smplx_anim_to_alembic/
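As a rough illustration of the animation data this stage consumes, the sketch below creates and reads back an SMPL-X-style `.npz` file. The key names and array shapes are assumptions for illustration (a typical SMPL-X animation stores per-frame axis-angle poses for 55 joints, shape betas, and root translations), not the exact BEDLAM2 file spec.

```python
import numpy as np

# Hypothetical layout of an SMPL-X animation data file (.npz).
# Key names and shapes are illustrative assumptions, not the BEDLAM2 spec.
num_frames = 30
anim = {
    "poses": np.zeros((num_frames, 165)),  # 55 joints x 3 axis-angle params per frame
    "betas": np.zeros(10),                 # body shape coefficients
    "trans": np.zeros((num_frames, 3)),    # global root translation per frame
    "mocap_frame_rate": np.array(30.0),
}
np.savez("example_smplx_anim.npz", **anim)

# Loading it back, e.g. inside a Blender script before baking to Alembic
data = np.load("example_smplx_anim.npz")
print(data["poses"].shape)  # (30, 165)
```

A Blender-side script would then drive the SMPL-X armature from these arrays and export the result via Alembic, as done in blender/smplx_anim_to_alembic/.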
- If you want to create modified bodies for shoe rendering (toeless sock feet), please use the corresponding code repository on the BEDLAM2 project website.
- Import simulated clothing and SMPL-X Alembic ABC files as `GeometryCache`
- Import body and clothing textures
- Import high-dynamic range panoramic images (HDRIs) for image-based lighting
- Import hair grooms
- Import shoe color textures and displacement maps
- Details: unreal/import/
The BEDLAM2 Unreal render setup uses a data-driven design: external data files (`be_seq.csv`, `be_camera_animations.json`) define the setup of the Unreal assets required for rendering.
- Generate body scene definition (`be_seq.csv`) based on randomization configuration for all the sequences in the desired render job
- Generate camera motion definition (`be_camera_animations.json`) for all the sequences in the desired render job
- Details: tools/sequence_generation/
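To make the data-driven setup concrete, the sketch below parses a sequence definition CSV and a camera animation JSON. The column and key names are illustrative assumptions, not the actual BEDLAM2 schema.

```python
import csv
import io
import json

# Hypothetical contents of the sequence-generation outputs; column and key
# names are illustrative assumptions, not the actual BEDLAM2 schema.
be_seq_csv = """sequence_name,body,clothing,texture,hdri
seq_000000,body_0001,outfit_0042,tex_0007,hdri_0123
seq_000001,body_0002,outfit_0013,tex_0031,hdri_0087
"""

be_camera_animations = """{
  "seq_000000": {"fov": 50.0, "keyframes": [[0, [0, -300, 170]], [60, [0, -250, 170]]]},
  "seq_000001": {"fov": 35.0, "keyframes": [[0, [100, -400, 160]]]}
}"""

# Each CSV row defines the assets for one sequence; the JSON defines the
# matching camera motion. An Unreal-side script would read both to build
# LevelSequence assets.
sequences = list(csv.DictReader(io.StringIO(be_seq_csv)))
cameras = json.loads(be_camera_animations)

for row in sequences:
    cam = cameras[row["sequence_name"]]
    print(row["sequence_name"], row["body"], "fov:", cam["fov"])
```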
- Auto-generate Unreal Sequencer `LevelSequence` assets based on selected body scene and camera motion setup files
- Render generated Sequencer assets with Movie Render Queue using the DX12 rasterizer with 7 temporal samples for motion blur
- If depth maps and segmentation masks are desired, a second optional render pass can output EXR files (16-bit float, multilayer, cryptomatte) without spatial and temporal samples
- Camera ground truth poses in Unreal coordinates are stored in EXR image metadata during rendering and later extracted to CSV and JSON format in the post-processing stage
- Details: unreal/render/
- Extract world-space camera ground truth information for center subframe
- Generate MP4 movies from image sequences with ffmpeg
- Generate overview images for first/middle/last image of each sequence
- Generate camera motion plots from extracted camera ground truth
- Extract separate depth maps (EXR), segmentation masks (PNG) and normal images (world-space or camera-space, PNG) if required EXR data is available
- Details: tools/post_render_pipeline/be_post_render_pipeline.sh
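For the MP4 generation step above, a command for encoding an image sequence with ffmpeg can be built along these lines. The frame-naming pattern (`%04d.png`) and encoder settings are illustrative assumptions; only the ffmpeg flags themselves are standard.

```python
from pathlib import Path


def build_ffmpeg_cmd(image_dir: Path, fps: int, out_path: Path) -> list:
    """Build an ffmpeg command for an image sequence named 0000.png, 0001.png, ...

    The frame-naming pattern and encoder settings are illustrative
    assumptions, not the BEDLAM2 pipeline's exact parameters.
    """
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", str(image_dir / "%04d.png"),
        "-c:v", "libx264",       # H.264 video
        "-pix_fmt", "yuv420p",   # widest player compatibility
        str(out_path),
    ]


cmd = build_ffmpeg_cmd(Path("seq_000000"), 30, Path("seq_000000.mp4"))
print(" ".join(cmd))
# Execute with: subprocess.run(cmd, check=True)
```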
- The starter pack contains a subset of 150 motions with simulated clothing for 51 body shapes, set up for rendering with shoes (toeless sock feet) and hair. Also included are body and clothing textures, hair and shoe assets, and HDR images.
- You can use this to test rendering. The data preparation and data import pipeline stages are not needed since these render assets are already set up in Unreal.
- Details: unreal/render/unreal_quickstart.md
- Rendering: Unreal Engine 5.3.2 for Windows and good knowledge of how to use it
- Data preparation: Blender (4.0.2 or later)
- Windows 11
- The data preparation stage will likely also work under Linux or macOS thanks to Blender, but we have not tested this and do not provide support for this option
- Windows WSL2 subsystem for Linux with Ubuntu 22.04 or 24.04
- Python for Windows (3.10.6 or later)
- Recommended PC Hardware:
- CPU: Modern multi-core CPU with high clock speed (Intel i9-12900K, AMD Ryzen Threadripper PRO 7955WX)
- GPU: NVIDIA RTX3090 or higher
- Memory: 128GB or more
- Storage: Fast SSD with 16TB of free space
- Clone repository to `C:\bedlam2\render` folder

```
C:\bedlam2\render
├── LICENSE.md
├── README.md
├── blender
├── config
├── stats
├── tools
└── unreal
```
- Create WSL2 Python 3.10.6+ venv at `$HOME/.virtualenvs/bedlam2/`
- Activate it and install required packages: `pip install -r requirements.txt`
- GitHub
- Issues
- Pull requests
  - We are not accepting unsolicited pull requests
```bibtex
@inproceedings{tesch2025bedlam2,
  title={{BEDLAM}2.0: Synthetic humans and cameras in motion},
  author={Joachim Tesch and Giorgio Becherini and Prerana Achar and Anastasios Yiannakidis and Muhammed Kocabas and Priyanka Patel and Michael J. Black},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2025}
}
```