November 12, 2025
Stop and smell the throughput
Jasmine: A Simple, Performant and Scalable Jax-Based World Modeling Codebase
Jasmine promises faster robot learning; Reddit cheers reproducibility, fights JAX vs PyTorch
TLDR: Jasmine is a fast, scalable codebase for training world models with fully reproducible results. Commenters cheer the reproducibility and dream of smarter robots, while bickering over JAX vs PyTorch and joking that “hundreds of accelerators” means most of us will be spectating, not scaling.
Robots just got a hype injection: Jasmine drops a new codebase for “world models” (think: training AI to learn a simulated world) that runs fast and scales from a laptop to hundreds of accelerators with “minimal code changes.” It also boasts fully reproducible training and flexible “sharding,” aka splitting the work across many machines so nothing melts.

The crowd? Loud. The top vibe is pure excitement: reproducibility is the community’s holy grail after too many late nights of “works on their machine, not mine.” One fan even called the reproducibility guarantee “really compelling,” while admitting this is “outside my wheelhouse” and still cheering that consumer robotics feels closer than ever.

Then the drama kicks in. The JAX vs PyTorch cage match lit up: some praise JAX for speed; others eye-roll, asking why not stick with the tool everyone already uses. Skeptics dunk on the benchmark flex, speeding up the CoinRun case study (a game-like test from Procgen), with memes like “Cool, my dishwasher still can’t load itself.” A recurring jab: “Hundreds of accelerators? I’m lucky if my single GPU doesn’t cry.” Still, the promise of clean, reproducible pipelines and big-data benchmarking has folks buzzing that this could finally bring order to world-model chaos.
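Why can JAX codebases credibly promise bit-reproducible training at all? JAX uses explicit, splittable PRNG keys instead of hidden global random state, so the same seed deterministically produces the same weights. A minimal sketch of that idea (illustrative only, not Jasmine's actual API; the function and parameter names here are made up):

```python
import jax
import jax.numpy as jnp

def init_params(key):
    # Hypothetical init helper: explicit key threading means the same
    # seed always yields exactly the same random weights.
    k_w, k_b = jax.random.split(key)
    return {
        "w": jax.random.normal(k_w, (4, 4)),
        "b": jax.random.normal(k_b, (4,)),
    }

# Two "runs" seeded identically produce bit-identical parameters.
run_a = init_params(jax.random.PRNGKey(0))
run_b = init_params(jax.random.PRNGKey(0))
print(bool((run_a["w"] == run_b["w"]).all()))  # True
```

This explicit-key discipline is one reason reproducibility guarantees are easier to uphold in JAX than in frameworks with implicit global RNG state.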
Key Points
- Jasmine is a JAX-based world modeling codebase designed for performance and scalability.
- It scales from single hosts to hundreds of accelerators with minimal code changes.
- Jasmine reproduces the CoinRun case study an order of magnitude faster than prior open implementations.
- Performance gains stem from optimizations in data loading, training, and checkpointing.
- The codebase guarantees fully reproducible training and supports diverse sharding configurations, enabling rigorous benchmarking with large-scale datasets.
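The "minimal code changes" claim for sharding is plausible because JAX lets the same jitted function run on one device or many: you describe how an array is partitioned, and the compiler handles the distribution. A minimal sketch of that pattern (generic JAX, not Jasmine's code; the 8-device CPU simulation is just so the example runs anywhere):

```python
import os
# Assumption for the demo: fake 8 CPU devices so the sharding path is
# exercised even without real accelerators. Must be set before importing jax.
os.environ["XLA_FLAGS"] = "--xla_force_host_platform_device_count=8"

import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Build a 1-D device mesh and declare the batch axis as data-parallel.
mesh = Mesh(jax.devices(), axis_names=("data",))
sharding = NamedSharding(mesh, P("data"))

batch = jnp.arange(32.0).reshape(8, 4)   # 8 examples of 4 features
batch = jax.device_put(batch, sharding)  # split rows across the 8 devices

@jax.jit
def step(x):
    # The same step function works unchanged whether x lives on
    # 1 device or N; only the sharding annotation above differs.
    return (x * 2.0).sum(axis=-1)

out = step(batch)
print(out.shape)  # (8,)
```

Scaling from a laptop to a cluster then mostly means changing the mesh and sharding spec, which is the kind of "minimal code change" the bullet points describe.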