mjlab combines Isaac Lab's manager-based API with MuJoCo Warp, a GPU-accelerated version of MuJoCo. The framework provides composable building blocks for environment design, with minimal dependencies and direct access to native MuJoCo data structures.
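To make "composable building blocks" concrete, here is a minimal sketch of the manager-based pattern mjlab inherits from Isaac Lab: an environment is declared as a config of term blocks that each compute one piece of the MDP. The names below (RewardTermCfg, RewardsCfg, track_lin_vel) are illustrative stand-ins, not mjlab's actual module paths.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class RewardTermCfg:
    func: Callable       # computes a per-env reward tensor on the GPU
    weight: float = 1.0  # scales this term's contribution

def track_lin_vel(env):
    # Placeholder term: in practice this would read the env's native MuJoCo
    # data (e.g. qpos/qvel) and return one reward value per environment.
    ...

@dataclass
class RewardsCfg:
    # Each field is one building block; add or remove terms to reshape the MDP.
    velocity: RewardTermCfg = field(
        default_factory=lambda: RewardTermCfg(func=track_lin_vel, weight=2.0)
    )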
mjlab requires an NVIDIA GPU for training. macOS is supported for evaluation only.
Try it now:
Run the demo (no installation needed):
uvx --from mjlab --refresh \
--with "mujoco-warp @ git+https://github.com/google-deepmind/mujoco_warp@7c20a44bfed722e6415235792a1b247ea6b6a6d3" \
demo

Or try in Google Colab (no local setup required).
Install from source:
git clone https://github.com/mujocolab/mjlab.git && cd mjlab
uv run demo

For alternative installation methods (PyPI, Docker), see the Installation Guide.
Train a Unitree G1 humanoid to follow velocity commands on flat terrain:
uv run train Mjlab-Velocity-Flat-Unitree-G1 --env.scene.num-envs 4096

Multi-GPU Training: Scale to multiple GPUs using --gpu-ids:
uv run train Mjlab-Velocity-Flat-Unitree-G1 \
--gpu-ids 0 1 \
--env.scene.num-envs 4096

See the Distributed Training guide for details.
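Dotted flags like --env.scene.num-envs address nested fields of the task's config dataclasses. A minimal sketch of that mapping, assuming a tyro-style CLI layer (an assumption about mjlab's internals, not documented behavior):

import dataclasses
import tyro

@dataclasses.dataclass
class SceneCfg:
    num_envs: int = 1024

@dataclasses.dataclass
class EnvCfg:
    scene: SceneCfg = dataclasses.field(default_factory=SceneCfg)

@dataclasses.dataclass
class TrainCfg:
    env: EnvCfg = dataclasses.field(default_factory=EnvCfg)

# `--env.scene.num-envs 4096` on the command line sets cfg.env.scene.num_envs.
cfg = tyro.cli(TrainCfg)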
Evaluate a policy while training (fetches latest checkpoint from Weights & Biases):
uv run play Mjlab-Velocity-Flat-Unitree-G1 --wandb-run-path your-org/mjlab/run-id
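For reference, the checkpoint lookup that play performs can be sketched with the W&B public API. This assumes checkpoints are uploaded as run files named model_*.pt, which is an assumption about the run layout rather than documented mjlab behavior:

import wandb

api = wandb.Api()
run = api.run("your-org/mjlab/run-id")  # same path as --wandb-run-path
# Assumption: checkpoints are stored as run files named like model_1000.pt.
ckpts = [f for f in run.files() if f.name.startswith("model_")]
latest = max(ckpts, key=lambda f: f.updated_at)
latest.download(replace=True)  # saved under the current directory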
Train a humanoid to mimic reference motions. mjlab uses WandB to manage motion datasets. See the motion preprocessing documentation for setup instructions.

uv run train Mjlab-Tracking-Flat-Unitree-G1 --registry-name your-org/motions/motion-name --env.scene.num-envs 4096
uv run play Mjlab-Tracking-Flat-Unitree-G1 --wandb-run-path your-org/mjlab/run-id

Use built-in agents to sanity-check your MDP before training:
uv run play Mjlab-Your-Task-Id --agent zero    # Sends zero actions
uv run play Mjlab-Your-Task-Id --agent random  # Sends uniform random actions

When running motion-tracking tasks, add --registry-name your-org/motions/motion-name to the command.
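Conceptually, these agents reduce to stepping the vectorized environment with trivial actions and checking that nothing degenerates. A gym-style sketch of that loop; make_env and the (num_envs, action_dim) layout are assumptions here, not mjlab's actual play internals:

import torch

env = make_env("Mjlab-Your-Task-Id", num_envs=16)  # hypothetical constructor
obs, info = env.reset()
for _ in range(1000):
    # --agent zero sends zeros; swap in the commented line for --agent random.
    action = torch.zeros(env.num_envs, env.action_dim)
    # action = 2.0 * torch.rand(env.num_envs, env.action_dim) - 1.0
    obs, reward, terminated, truncated, info = env.step(action)
    # A healthy MDP keeps rewards finite and episodes resetting as expected.
    assert torch.isfinite(reward).all()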
Full documentation is available at mujocolab.github.io/mjlab.
make test # Run all tests
make test-fast # Skip slow tests
make format # Format and lint
make docs      # Build docs locally

For development setup: uvx pre-commit install
If you use mjlab in your research, please cite:
@misc{zakka2026mjlablightweightframeworkgpuaccelerated,
  title={mjlab: A Lightweight Framework for GPU-Accelerated Robot Learning},
  author={Kevin Zakka and Qiayuan Liao and Brent Yi and Louis Le Lay and Koushil Sreenath and Pieter Abbeel},
  year={2026},
  eprint={2601.22074},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2601.22074},
}

mjlab is licensed under the Apache License, Version 2.0.
Some portions of mjlab are forked from external projects:
src/mjlab/utils/lab_api/ — Utilities forked from NVIDIA Isaac Lab (BSD-3-Clause license, see file headers)
Forked components retain their original licenses. See file headers for details.
mjlab wouldn't exist without the excellent work of the Isaac Lab team, whose API design and abstractions mjlab builds upon.
Thanks to the MuJoCo Warp team — especially Erik Frey and Taylor Howell — for the countless times they answered our questions, gave helpful feedback, and implemented features based on our requests.
