March 12, 2026
Turbo… unless your data shapeshifts
Show HN: XLA-based array computing framework for R
R gets a turbo boost with Anvil, but what happens when your data changes shape?
TL;DR: Anvil brings fast, on-the-fly compiled math and GPU power to R, plus automatic differentiation. The top reaction: excitement tempered by worries about variable-size inputs and repeated recompiles. The big question: blazing fast for steady shapes, but what if your data keeps changing?
R just got a flashy new toy: Anvil, a framework that promises “speed of light” number‑crunching by compiling your code on the fly and running it on CPUs or GPUs. It even does automatic differentiation—the math trick behind training neural nets and optimizing models. The devs say most of it is written in R, and it leans on Google’s OpenXLA under the hood. So yes, the hype meter lit up.
But the very first commenter slammed the brakes with the buzzkill question of the day: what about variable‑size inputs? In plain English: if your data doesn’t always look the same, does Anvil have to rebuild everything each time? The commenter, who’s been fighting similar shape headaches running JAX (a popular Python tool) models in C++, suspects recompiling for every new shape could turn “speed of light” into “speed of coffee break.”
That sparked a very clear vibe: excitement, with a side of caution. Fans liked the idea of R doing GPU tricks without leaving home. Skeptics eyed the fine print: Anvil recompiles for each new input shape, which can be great for speed once warmed up—but costly if your data keeps changing. Cue the jokes: “Fast—if your tensors don’t shapeshift.” The crowd’s mood? Hyped, but watching the shapes.
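The recompile-per-shape behavior the commenters worry about is not unique to Anvil: any XLA-style JIT compiles one kernel per input shape and caches it, so a stream of new shapes means a stream of compiles. A minimal Python sketch of that caching pattern (the `compile_kernel` function is a hypothetical stand-in for a real compiler, used here only to count compilations):

```python
# Sketch of shape-keyed JIT caching, as in XLA-style frameworks.
# `compile_kernel` stands in for a real compiler: it just counts
# how often it runs and returns a plain Python function.

compile_count = 0

def compile_kernel(shape):
    global compile_count
    compile_count += 1               # a real compiler pays seconds here
    def kernel(xs):
        return sum(x * x for x in xs)  # the "compiled" sum of squares
    return kernel

_cache = {}

def jit_sum_squares(xs):
    shape = len(xs)                  # a 1-D "shape" is just the length
    if shape not in _cache:          # new shape -> new compilation
        _cache[shape] = compile_kernel(shape)
    return _cache[shape](xs)

jit_sum_squares([1.0, 2.0, 3.0])     # compiles for shape 3
jit_sum_squares([4.0, 5.0, 6.0])     # cache hit: same shape, no compile
jit_sum_squares([1.0, 2.0])          # new shape 2 -> compiles again
print(compile_count)                 # 2
```

With steady shapes the compile cost is paid once and amortized away; with ever-changing shapes, every call can hit the slow path, which is exactly the "speed of coffee break" scenario the first commenter describes.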
Key Points
- Anvil adds JIT compilation and backward-mode automatic differentiation to R for numerical computing.
- Programs run on CPU and GPU backends; code is compiled into a single kernel for speed.
- Installation options include pak, r-universe configuration, building from source (C++20, libprotobuf, protobuf-compiler), and Docker images.
- Gradients are supported for functions with scalar outputs, demonstrated via quick-start examples with AnvilTensor.
- Anvil recompiles for each unique input shape, enabling memory optimizations but adding compilation overhead for fast-running programs.
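Backward-mode autodiff for scalar outputs, the feature named in the key points, can be sketched generically in a few lines: record each operation's local derivatives on a tape, then sweep backwards from the scalar output applying the chain rule. This is an illustration of the general technique only, not Anvil's actual implementation:

```python
# Minimal reverse-mode (backward) autodiff: each Var remembers its
# parents and the local derivative of the op that produced it; a
# backward sweep from a scalar output accumulates chain-rule products.
# Generic illustration -- not Anvil's implementation.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (parent Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(output):
    # Topologically order the graph, then walk it in reverse.
    order, seen = [], set()
    def visit(v):
        if id(v) not in seen:
            seen.add(id(v))
            for parent, _ in v.parents:
                visit(parent)
            order.append(v)
    visit(output)
    output.grad = 1.0            # d(output)/d(output) = 1
    for v in reversed(order):
        for parent, local in v.parents:
            parent.grad += v.grad * local

# f(x, y) = x*y + x  ->  df/dx = y + 1,  df/dy = x
x, y = Var(3.0), Var(4.0)
f = x * y + x
backward(f)
print(f.value, x.grad, y.grad)   # 15.0 5.0 3.0
```

The scalar-output restriction mirrors how reverse mode works best: one backward sweep yields gradients with respect to every input, which is why it is the mode of choice for training models with a single loss value.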