December 29, 2025
Elon memes meet math dreams
Five Years of Tinygrad
Tinygrad turns 5: Big promises, bigger skeptics, and Elon vibes
TLDR: Tinygrad turns five with a lean, pure‑Python push to challenge NVIDIA and help AMD train huge AI models. The crowd is split: fans cheer the minimalist vision, while skeptics mock the “Elon process” hype and demand proof of what it replaces or has actually achieved.
Tinygrad just hit five years, bragging about a lean ~19k-line codebase, a "deconstructed company" run on Discord, and a bold plan: pure Python drivers, zero dependencies, and a path to rival NVIDIA by owning the whole software stack. They're even working with AMD to get Llama 405B training onto the MLPerf benchmark, and they say tinygrad already beats PyTorch on many workloads. The internet? Not buying the vibe, at least not yet.
The strongest reactions: skeptics want receipts, not poetry. One commenter waved off the manifesto as “lots of words” and asked the basics: what’s working now, what’s shipped, what’s actually replaced? Another jab: calling it the “Elon process” made folks roll their eyes, while old-school fans remembered Geohot’s PlayStation rap and wondered if this is swagger over substance. Meanwhile, the “all Python” angle sparked a tech gossip moment—people joked about LLVM legend Chris Lattner lurking and asked if pure Python can really drive GPUs at scale.
Humor and memes flew: “make the requirements less dumb” became a catchphrase, and several quipped “put it on MLPerf or it didn’t happen.” Supporters love the tiny, focused code and public deals negotiated on Twitter; doubters think line-count flexing is a red flag. The drama? Vision vs. verification, Elon vibes vs. engineering receipts, all playing out in the comments of tinygrad.
Key Points
- Tinygrad began on October 17, 2020, and now comprises roughly 18,935 lines of code, with a team of six at Tiny Corp.
- The project is removing LLVM to achieve zero dependencies beyond pure Python for AMD GPU support, encompassing a frontend, graph compiler, runtimes, and drivers.
- The article claims tinygrad outperforms PyTorch on many workloads and positions a software-first strategy to compete with NVIDIA.
- Tiny Corp operates publicly via Discord and GitHub, funds development through a computer sales division (~$2M/year), and hires via repo contributions.
- Tiny Corp has a contract with AMD to put the MI350X on MLPerf for Llama 405B training, negotiated mostly on Twitter; the stated mission is to commoditize the petaflop.