April 16, 2026

AI in a trench coat, but it’s 1989

Show HN: MacMind – A transformer neural network in HyperCard on a 1989 Macintosh

A tiny “AI” on a 1989 Mac has the internet yelling wow and “prove it”

TL;DR: A hobbyist squeezed a tiny chatbot-style brain into a 1989 Macintosh using only old-school scripting, showing that the training process is simple math you can inspect. The crowd split between wowed nostalgics and "show me" skeptics—until a simulator link let everyone click, test, and grin at the throwback magic.

A retro Macintosh just crashed the modern AI party, and the crowd is loud about it. MacMind squeezes a tiny transformer—the kind of brain behind chatbots—into a 1989 Mac, written entirely in HyperTalk, the Mac's old scripting language. It learns a number-shuffling trick (bit reversal, the first step of the fast Fourier transform, a classic signal-processing method) and lets you peek at every line of math. The goal: show that AI is math, not magic—and commenters are eating it up.
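That "number-shuffling trick" is the bit-reversal permutation: write each index as a 3-bit binary number, reverse the bits, and that's the element's new position. Here's a minimal Python sketch of the rule the model has to discover from examples (illustrative only—the actual stack is pure HyperTalk):

```python
def bit_reverse_permutation(n_bits):
    """Return the bit-reversal permutation over 2**n_bits indices."""
    n = 1 << n_bits
    perm = []
    for i in range(n):
        # Reverse the n_bits-wide binary representation of i.
        rev = int(format(i, f"0{n_bits}b")[::-1], 2)
        perm.append(rev)
    return perm

# For 8 elements: index 1 (001) maps to 4 (100), index 3 (011) to 6 (110), etc.
print(bit_reverse_permutation(3))  # → [0, 4, 2, 6, 1, 5, 3, 7]
```

MacMind never sees this rule—it only sees random input/output pairs and has to infer the mapping through gradient descent.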

The hottest reaction? Awe. One user called it “like sending germ theory back to the ancient Greeks,” and the thread piled on with nostalgic cheers and brainy comparisons. Another voice of calm wonder said this proves AI progress isn’t just about bigger graphics cards—it’s clever math on any machine. But the peanut gallery has needs: “Any more demos of inference output?” one skeptic prodded, and the community delivered a crowd-pleaser link to a live HyperCard simulator so you can click and see it work yourself: hcsimulator.com/imports/MacMind---Trained-69E0132C.

Between throwback vibes and nerd joy, commenters kept zooming into the “attention map” (a visual of what the model focuses on) like it was a vintage sci‑fi prop. The memes wrote themselves: your grandpa’s beige Mac just did “AI.” The verdict? Half museum exhibit, half mic drop—and a surprisingly friendly reminder that today’s giant models and a 1989 Mac are playing the same math game.

Key Points

  • MacMind is a 1,216-parameter, single-layer, single-head transformer implemented entirely in HyperTalk on a Macintosh SE/30, with no compiled code or external libraries.
  • The model learns the 8-element bit-reversal permutation (the first step of the FFT) from random examples using self-attention and gradient descent, without being told the rule.
  • The HyperCard stack has five cards: Title, Training, Inference, Attention Map, and About, enabling training, testing, visualization, and explanation.
  • Training controls include Train 10, Train to 100%, and trainN via the Message Box; each step runs full forward/backprop with real-time metrics; a 30,000-character log limit requires manual clearing.
  • After training, the attention map reveals the FFT butterfly pattern, aligning with the Cooley–Tukey structure, illustrating that the same training math applies from retro hardware to modern LLMs.
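The "same training math" claim is easy to see in miniature. Below is a hedged NumPy sketch of one self-attention head—the core operation MacMind implements in HyperTalk—with toy dimensions chosen only for illustration (the real model's exact layer sizes aren't detailed here):

```python
import numpy as np

def single_head_attention(X, Wq, Wk, Wv):
    """One self-attention head: queries, keys, values, softmax-weighted mix."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])  # scaled dot-product scores
    # Row-wise softmax produces the attention map the stack visualizes.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V, weights

# Toy setup: 8 sequence positions, 8-dimensional embeddings (illustrative only).
rng = np.random.default_rng(0)
seq_len, d_model = 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out, attn = single_head_attention(X, Wq, Wk, Wv)
print(attn.shape)  # one attention weight per (query, key) pair
```

After training on the permutation task, plotting a matrix like `attn` is what reveals the butterfly-shaped crossings that mirror the Cooley–Tukey FFT structure.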

Hottest takes

"This feels ... like the germ theory being transferred back to the ancient greeks" — gcanyon
"remind you how much of it is just more clever math" — edwin
"Any more demos of inference output?" — DetroitThrow
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.