November 12, 2025
Grandpa Mac goes AI
The PowerPC Has Still Got It (Llama on G4 Laptop)
2005 PowerBook runs AI—nostalgia cheers, pedants rage, speed crawls
TLDR: A 2005 PowerBook G4 ran a tiny AI model after clever tweaks, at about 0.88 tokens/sec. Commenters argued over Apple’s “custom” silicon, claimed AI is just math, and flexed vintage rigs—proof that old machines can play, even if they’re painfully slow.
A vintage Mac from 2005 just spat out AI stories, very slowly, and the comments went full time machine. Andrew Rossignol got a PowerBook G4 to run a tiny storytelling model using a tweaked fork of llama2.c, converting the checkpoint and tokenizer data to the old chip's big-endian byte order and coaxing its vector engine (AltiVec) to squeeze out speed. The numbers? 0.77 tokens per second, nudged to 0.88 with optimizations, or roughly four minutes per short paragraph. The crowd split into camps: the "AI is just math" crew, the "Apple didn't make PowerPC" pedants, and the "I still love that 12-inch G4" nostalgia squad.
The hottest take came from anon291: AI isn't magic, just math and memory. Then jchw rolled in to fact-check the headline, arguing that Apple didn't design PowerPC on its own (the architecture came out of the Apple-IBM-Motorola alliance), so calling it "custom" silicon is misleading. Meanwhile, the hardware flexers arrived: buildbot casually dropped a list of retro systems they've jammed AI into like it's a quirky hobby. Nostalgia peaked with fans declaring the 12-inch G4 the "best laptop ever," while practical users like markgall admitted it still handles real work, right up until the modern web shows up. It's a perfect retro showdown: slow-cooked AI, pedant wars over what counts as "custom silicon," and a wave of cozy Mac memories, all stirred by one stubbornly alive 20-year-old laptop. Read Rossignol's write-up at theresistornetwork.com.
Key Points
- A 2005 PowerBook G4 (1.5 GHz 32-bit PowerPC, 1 GB RAM) was used to run a modern LLM.
- The experiment used a fork of Andrej Karpathy's open-source llama2.c with the 110M-parameter TinyStories model.
- The software was modified for PowerPC's big-endian architecture, converting the checkpoint and tokenizer data on load.
- Weights had to be copied manually into properly aligned buffers because the typical x86 approach of memory-mapping the checkpoint directly did not work.
- Performance reached 0.77 tokens/s, improved to 0.88 tokens/s using AltiVec; generating a short paragraph took about four minutes.