ExecuTorch-Ruby: Run PyTorch models in Ruby

Ruby gets AI superpowers—devs cheer while build errors scream

TLDR: A new Ruby gem lets apps run exported PyTorch models, bringing AI predictions into Rails without a Python service. The community is split: Ruby fans celebrate the freedom, skeptics mock the build-from-source headaches and question using Meta’s runtime, and pragmatists say just Docker it and ship.

Ruby just got a flashy new trick: with ExecuTorch-Ruby, developers can run PyTorch AI models right inside their Ruby apps. In plain English: you train your model in Python, export it as a single .pte file, and Ruby reads that file to make predictions. The demo shows a model spitting out numbers like it’s doing math homework, and folks are sharing the gem as if it’s a cheat code for Rails.
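
In code, the happy path looks something like the sketch below. The class and method names here (ExecuTorch::Module, ExecuTorch::Tensor, predict) are illustrative stand-ins for whatever the gem actually exposes; the key points further down describe the surface only loosely, so treat this as a sketch, not the documented API.

```ruby
require "executorch"

# Load a program previously exported from PyTorch as a .pte file.
model = ExecuTorch::Module.load("model.pte")

# Build an input tensor; shape and dtype are inferred from the nested array.
input = ExecuTorch::Tensor.new([[1.0, 2.0, 3.0]])

# Run inference and print the model's "math homework" numbers.
output = model.predict(input)
puts output.to_a.inspect
```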

Cue the chaos. Ruby loyalists are throwing confetti—“No more Python microservices!”—while skeptics clutch pearls at the install steps: you have to build Meta’s ExecuTorch (a lightweight AI engine) from source. Comments exploded with jokes like “Bundle install, but make it CrossFit,” and memes of Gemfile vs Gymfile as devs wrestled the C++17 requirement. Some called it a game-changer for inference (making predictions) while reminding everyone you still train models in Python. Others fretted about platform limits (macOS/Linux only), and warned the build errors are “vibes-based” at best.

There’s spicy debate over trust—Meta’s runtime in my Rails?—and whether ONNX would be safer or simpler. Pragmatists say “Docker it and move on.” Purists shout “Ruby isn’t for AI!” while Rails shops yell back, “It is now.” The CI tips and environment variables became their own mini soap opera, complete with copy-paste drama.

Key Points

  • ExecuTorch-Ruby provides Ruby bindings to Meta’s ExecuTorch, enabling execution of exported PyTorch models (.pte files) in Ruby applications.
  • Installation requires building ExecuTorch from source with specific CMake flags, then configuring Bundler and installing the executorch gem (a hedged setup sketch follows this list).
  • Usage includes creating tensors with inferred or explicit shapes and dtypes, loading .pte models, and running inference via predict/forward/call (sketched after this list).
  • Model export from PyTorch to .pte uses torch.export and executorch.exir.to_edge, followed by writing the ExecuTorch program to a file.
  • Troubleshooting covers missing installations/headers and undefined symbols, with solutions including rebuilding extensions and linking extra libraries.
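
For anyone sizing up the “CrossFit bundle install,” here is a hedged sketch of the setup. The repository URL and the Bundler build-config mechanism are real; the CMake flags, install paths, and mkmf-style option name are placeholders for whatever the gem’s README actually documents. (The export side stays in Python via torch.export and executorch.exir.to_edge, so it is not sketched here.)

```ruby
# Gemfile -- the Ruby side of the setup. Assumes the gem is published
# simply as "executorch"; pin a version in a real project.
source "https://rubygems.org"

gem "executorch"

# Before `bundle install`, ExecuTorch itself must be built from source
# with the CMake flags the gem's README lists (elided here):
#
#   git clone https://github.com/pytorch/executorch
#   cmake -S executorch -B build -DCMAKE_INSTALL_PREFIX="$HOME/executorch" ...
#   cmake --build build --target install
#
# Then tell Bundler where that build lives so the native extension can
# find it. The option name below is an assumption, not the documented one:
#
#   bundle config build.executorch --with-executorch-dir="$HOME/executorch"
#
# Troubleshooting, per the thread: missing headers usually mean the path
# above is wrong; undefined symbols usually mean the extension needs a
# rebuild (`gem pristine executorch`) or extra ExecuTorch libraries on
# the linker line.
```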
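
And the tensor surface, in the same hedged spirit. Tensor.new, the shape:/dtype: keywords, and the three inference aliases are assumptions modeled on the key points above, not verified signatures.

```ruby
require "executorch"

# Shape and dtype inferred from a nested Ruby array.
a = ExecuTorch::Tensor.new([[1.0, 2.0], [3.0, 4.0]])

# Flat data with explicit shape and dtype, for when inference guesses wrong.
b = ExecuTorch::Tensor.new([1, 2, 3, 4, 5, 6], shape: [2, 3], dtype: :float32)

model = ExecuTorch::Module.load("model.pte")

# Per the key points, predict, forward, and call all run inference.
model.predict(b)
model.forward(b)
model.call(b)
```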

Hottest takes

“Ruby finally gets a seat at the AI table—no Python sidecar” — railsguy
“This is just ‘pip install pain’ but in Gem form” — snakes4life
“If I need a C++17 compiler for bundle install, I’m out” — bundle_burnout

Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.