Upgrade decision

Should I upgrade from 0.4.4?

5 min read · for SO-101 users

Probably not yet. ACT and Diffusion code is byte-identical between 0.4.4 and 0.5.0. The upgrade asks you to rebuild your Python environment to 3.12 and migrate to transformers v5 — for changes that won't help most hobbyists. Stay on 0.4.4 unless you specifically need pi-Gemma improvements or the new cudnn_deterministic flag.

What this means for you: if you train ACT or Diffusion on SO-101 demos, the upgrade is pure cost (rebuild the env, migrate transformers) with no real benefit. If you fine-tune π0 or π0.5, you may need to re-validate your weights — image normalization changed silently.

Do you train ACT or Diffusion only?
   │
   ├── Yes → STAY on 0.4.4
   │
   └── No, I use π0 / π0.5 / SmolVLA / fine-tunes
         │
         └── Want pi-Gemma backbone refactor or cuDNN-determinism?
               │
               ├── Yes → Upgrade, re-validate fine-tunes
               │
               └── No  → STAY on 0.4.4

The basics

What's NEW in 0.5.0?

Unitree G1 whole-body control, the pi-Gemma backbone refactor for π0/π0.5, a cudnn_deterministic training flag, transformers v5, and a Python 3.12 floor, spread across 26 PRs.

What's BROKEN by upgrading?

The HIL-SERL reward model won't load under transformers v5, the huggingface-cli binary is renamed to hf, and π0/π0.5 image normalization changed silently, so fine-tunes need re-validation.

What does NOT change?

ACT, Diffusion Policy, the LeRobotDataset format on disk, the async-inference stack, the RL stack, and the motor / camera drivers (other than G1) are all untouched. Your existing recordings load fine on either version.

How you actually use it

If you do upgrade, here's the four-step migration:

# 1) Snapshot your current environment
pip freeze > lerobot-v044-frozen.txt

# 2) Build a fresh Python 3.12 env
conda create -n lerobot-v050 python=3.12 -y
conda activate lerobot-v050

# 3) Install
pip install lerobot==0.5.0

# 4) Re-validate any π0 / π0.5 / SmolVLA fine-tunes
lerobot-eval --policy.path=path/to/old_pi0_finetune ...

If anything breaks, roll back — the dataset format is unchanged:

pip install lerobot==0.4.4

Things to know

HIL-SERL is broken under transformers v5

The HIL-SERL reward model helper2424/resnet10 won't load under transformers v5: the model on the Hub lacks self.post_init(), so from_pretrained fails. If you do online RL, either pin transformers v4 manually or stay on 0.4.4 for that workflow.

huggingface-cli is renamed to hf

The huggingface-hub 0.34→1.0 jump dropped the old huggingface-cli binary in favor of hf. Update any shell scripts.
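Commands like download and upload keep their names under hf, so a plain textual rename usually suffices (double-check auth-related subcommands, which were reorganized). A sketch, demonstrated on a throwaway file; point the grep at your real script directories:

```shell
# Demo setup: a throwaway script using the old binary name
mkdir -p demo-scripts
printf 'huggingface-cli download some-user/some-repo\n' > demo-scripts/fetch.sh

# Flip every old invocation in place (sed keeps .bak backups)
grep -rl 'huggingface-cli' demo-scripts/ | xargs sed -i.bak 's/huggingface-cli/hf/g'

cat demo-scripts/fetch.sh
# → hf download some-user/some-repo
```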

lerobot.available_policies is stale

The constant lerobot.available_policies still lists only four policies in both 0.4.4 and 0.5.0, even though the repo ships 14 policy folders. Don't rely on it programmatically. (It's not a 0.5.0 regression — just a long-standing wart.)
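If you need the real list programmatically, enumerating the policy packages on disk is more reliable than the stale constant. A sketch: the helper below is generic, not a lerobot API, and works on any importable package with submodule folders:

```python
import importlib.util
import pathlib

def subpackages(pkg_name):
    """List the on-disk subpackage folders of an importable package,
    skipping private (underscore-prefixed) directories."""
    spec = importlib.util.find_spec(pkg_name)
    root = pathlib.Path(spec.submodule_search_locations[0])
    return sorted(
        p.name for p in root.iterdir()
        if p.is_dir() and not p.name.startswith("_")
    )

# subpackages("lerobot.policies") should surface every shipped policy folder,
# not just the four that lerobot.available_policies reports.
```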

Dataset format is unchanged

Both versions emit the same v3.0 LeRobotDataset format on disk. No re-recording needed. ACT and Diffusion folders are byte-identical between tags — your existing code paths don't change.

Optional: under the hood
Full diff (8 added, 2 removed, 96 modified)

v0.5.0 was released 2026-03-09, ten days after v0.4.4 (2026-02-27), and bundles 26 PRs. Diff sweep totals (diff -rq, excluding .git): 8 added, 2 removed, 96 modified.

Added (8 new files)

  • AI_POLICY.md — project policy for AI-assisted contributions.
  • src/lerobot/policies/pi_gemma.py (363 lines, PR #2964) — refactor target for pi0/pi05. PaliGemmaForConditionalGenerationWithPiGemma, PiGemmaForCausalLM, etc. Compat layer for transformers v5.
  • src/lerobot/robots/unitree_g1/g1_kinematics.py (287 lines) — WeightedMovingFilter + casadi/pinocchio IK. Replaces the deleted robot_kinematic_processor.py.
  • src/lerobot/robots/unitree_g1/gr00t_locomotion.py (205 lines) — ONNX-runtime locomotion policy on top of NVIDIA GR00T weights. 50 Hz, dual-ONNX.
  • src/lerobot/robots/unitree_g1/holosoma_locomotion.py (214 lines) — alternate ONNX locomotion controller (Amazon FAR Holosoma). 200 Hz, single-ONNX.
  • tests/robots/test_unitree_g1.py + tests/teleoperators/test_unitree_g1_teleoperator.py — first proper test coverage for G1.
  • examples/dataset/slurm_compute_rabc.py (490 lines, PR #3041) — SLURM-distributed SARM RA-BC progress-annotation pipeline. Useful only with a SLURM cluster.

Removed (2 files)

Both removals are refactors, not deletions:

  • src/lerobot/robots/unitree_g1/robot_kinematic_processor.py (313 lines) — functionality moved into the new g1_kinematics.py.
  • examples/unitree_g1/ — the two example scripts (gr00t_locomotion.py, holosoma_locomotion.py) were promoted into the library proper at src/lerobot/robots/unitree_g1/.

Modified (top of the 96-file list)

  • Type-system modernization — many files rewritten to use PEP 695 native generics (class Foo[T]: instead of Generic[T]). Non-breaking for callers but compile-time errors on Python <3.12. This is the main reason the Python floor moved.
  • Processor pipeline (state padding moved) — policies/pi05/processor_pi05.py:25-72 removed the explicit state = pad_vector(...) step. Padding still happens, but inside modeling_pi05.pad_vector at action-tensor build time.
  • Scripts — lerobot_record.py and lerobot_teleoperate.py gained teleop.send_feedback(obs) calls when robot.name == "unitree_g1". Non-G1 robots are unaffected. lerobot_train.py gained the cuDNN-determinism switch.
  • Docs — updated to use hf CLI instead of huggingface-cli (PR #3071). Pre-commit and Dockerfiles bumped to py312.
  • HIL-SERL TODO — policies/sac/reward_model/configuration_classifier.py:36 adds a TODO comment noting that helper2424/resnet10 won't load under transformers v5.
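The generics change in the first bullet is easy to probe portably: compiling a PEP 695 snippet shows exactly why the Python floor moved. A standalone sketch, nothing lerobot-specific:

```python
import sys

# New-style generic class syntax (PEP 695), the style the 0.5.0 codebase adopted.
NEW_STYLE = "class Buffer[T]:\n    items: list[T]\n"

try:
    compile(NEW_STYLE, "<pep695>", "exec")
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: PEP 695 parses")
except SyntaxError:
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
          "SyntaxError, below the 3.12 floor")
```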

Release-note headlines

feat(robots): Unitree G1 WBC implementation by @nepyope in #2876
feat(train): add cudnn_deterministic option by @imgeorgiev in #3102
chore(dependencies): bump transformers v5 by @imstevenpmwork in #2964
feat(dependencies): require Python 3.12+ by @imstevenpmwork in #3023
Feat/slurm compute rabc script by @pkooij in #3041
chore: add AI policy by @imstevenpmwork in #3055
chore(readme): update citation with ICLR26 paper by @imstevenpmwork in #3107
Breaking dependency table (every pyproject.toml row that changed)

Field             v0.4.4                                                    v0.5.0                       Source
requires-python   >=3.10                                                    >=3.12                       pyproject.toml:32
transformers-dep  >=4.57.1,<5.0.0                                           >=5.3.0,<6.0.0               pyproject.toml:102
huggingface-hub   [hf-transfer,cli]>=0.34.2,<0.36.0                         >=1.0.0,<2.0.0               pyproject.toml:65
numpy             (implicit)                                                >=2.0.0,<2.3.0 (top-level)   pyproject.toml:69
placo-dep         <0.10.0                                                   <0.9.17                      pyproject.toml:101
unitree-sdk2      (absent)                                                  ==1.0.1                      pyproject.toml:122
pi extra          custom transformers @ git+...@fix/lerobot_openpi branch   regular transformers-dep     pyproject.toml:147
wallx extra       custom transformers branch                                mainline transformers v5     pyproject.toml:141-145
[all] extra       pi / wallx commented out                                  both re-enabled              pyproject.toml:194-195

New consolidated extra deps in v0.5.0: peft-dep, scipy-dep, qwen-vl-utils-dep, matplotlib-dep (pyproject.toml:105-108).

huggingface-hub dropped its [hf-transfer,cli] extras, and the huggingface-cli binary is now hf. The big [tool.uv] conflicts block (~80 lines) enforcing wallx/pi mutual exclusion was removed, because both extras now share mainline transformers v5.
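Before and after the jump, it's worth auditing the pins from the table against your live environment. A small sketch (package names come from the table; audit_env is a hypothetical helper, not part of lerobot):

```python
import sys
from importlib.metadata import PackageNotFoundError, version

def audit_env():
    """Report the interpreter version and installed versions of the packages
    whose pins changed between 0.4.4 and 0.5.0 (None means not installed)."""
    report = {"python": sys.version.split()[0]}
    for pkg in ("transformers", "huggingface-hub", "numpy", "lerobot"):
        try:
            report[pkg] = version(pkg)
        except PackageNotFoundError:
            report[pkg] = None
    return report

print(audit_env())  # compare against the v0.5.0 column before upgrading
```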

Silent numeric changes in π0 / π0.5, with file refs

Beyond the dependency bumps, π0 and π0.5 changed image normalization in a way that silently affects model outputs. Any π0 / π0.5 fine-tunes from v0.4.4 must be re-validated or re-finetuned — the model sees a different image distribution.

v0.4.4:
  resized_images.clamp(-1.0, 1.0)
  constant_value = -1.0  # for pad

v0.5.0:
  resized_images.clamp(0.0, 1.0)
  constant_value = 0.0   # for pad

Sources (line numbers given for both versions since they shifted):

  • v0.4.4 src/lerobot/policies/pi0/modeling_pi0.py:194,205 → v0.5.0 src/lerobot/policies/pi0/modeling_pi0.py:202,213
  • v0.4.4 src/lerobot/policies/pi05/modeling_pi05.py:192,203 → v0.5.0 src/lerobot/policies/pi05/modeling_pi05.py:199,210

Two more silent changes in pi0 / pi05 internals:

  • Gemma / PaliGemma transformer access now routes through paligemma.model.language_model.layers[...] (added .model. segment) — transformers v5 module path change.
  • dtype= is used instead of the deprecated torch_dtype= kwarg.
Quick test before re-finetuning

As a fast sanity check, branch the preprocessing to match v0.4.4 normalization (clamp(-1, 1), pad value -1.0) and re-run inference on a held-out batch. Compare actions against your v0.4.4 outputs. If they match within tolerance, the weights are recoverable without a full re-train.
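A sketch of that check in isolation, using numpy stand-ins for the two clamp/pad regimes (the preprocess helper below is illustrative, not the lerobot implementation):

```python
import numpy as np

def preprocess(images, legacy=False):
    """Clamp, then pad onto a fixed canvas. legacy=True mimics the v0.4.4
    regime (range [-1, 1], pad -1.0); the default mimics v0.5.0
    (range [0, 1], pad 0.0)."""
    lo = -1.0 if legacy else 0.0
    clamped = np.clip(images, lo, 1.0)
    h, w = images.shape[-2:]
    canvas = np.full((*images.shape[:-2], 224, 224), lo, dtype=images.dtype)
    canvas[..., :h, :w] = clamped
    return canvas

rng = np.random.default_rng(0)
batch = rng.uniform(-1.0, 1.0, size=(2, 3, 200, 200))  # pixels in [-1, 1]
old, new = preprocess(batch, legacy=True), preprocess(batch, legacy=False)

# Every negative pixel is clipped to 0 under v0.5.0, so the input
# distribution shifts; compare your fine-tune's actions the same way.
print(np.allclose(old, new))  # → False
```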

Where to go next →

Back to the hub — that's the end of the tour. From the index you can revisit any page or jump to the glossary.