Xortran - A PDP-11 Neural Network With Backpropagation in Fortran IV

1970s computer learns a logic trick; comments erupt: nostalgia vs “why”

TLDR: A coder wrote a tiny learning program in 1970s-era FORTRAN that solves a simple logic puzzle on a PDP‑11. Comments swing from nostalgic “I was wrong” confessions to debates over usefulness, showing retro tech can still teach modern lessons and spark big feelings.

The internet just watched a 1970s minicomputer “learn” a logic puzzle—and the comments are the true show. A developer built XORTRAN, a tiny neural network (think baby AI) in FORTRAN IV and ran it on a vintage PDP‑11 under the RT‑11 operating system via the SIMH emulator. With only 32 KB of memory, it slowly mastered XOR, a simple yes/no brain teaser. Cue the crowd: retro fans are buzzing, modern AI folks are shocked and delighted, and the pragmatists are asking, “But… why?”
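For anyone outside the retro-computing bubble: XOR (“exclusive or”) is true exactly when its two inputs differ. The whole truth table the network has to learn fits in a few lines (shown here in Python, not the article's FORTRAN):

```python
# XOR truth table: output is 1 exactly when the two inputs differ.
# This four-row table is the entire dataset XORTRAN trains on.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", a ^ b)  # ^ is Python's bitwise XOR
```

The catch, and the reason XOR is a classic benchmark, is that no single straight line separates the 1s from the 0s, so a network needs at least one hidden layer to solve it.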

The hottest take comes from one commenter who once thought early neural nets on PDP‑11s were a “waste of time” and now admits, “Silly me.” That confession lit up the thread: one camp cheers the history lesson (“backpropagation existed before GPUs!”); another shrugs, calling it a party trick. The debate boils down to vibe vs value—is this art, or just nerd cosplay?

Meanwhile, the jokes fly. People compared the PDP‑11 to “AI on a toaster,” riffed on leaky ReLU as “leaky capacitors,” and imagined training on punch cards. Screenshots of loss numbers every 100 loops turned into memes about the machine “trying its best.” Whether you see it as nerd nostalgia or mini‑miracle, the mood is clear: old tech just schooled the timeline—and folks have feelings about it. More on the PDP‑11: Wikipedia.

Key Points

  • XORTRAN implements a FORTRAN IV multilayer perceptron to learn XOR on a PDP‑11/34A under RT‑11, tested via SIMH.
  • The network uses one hidden layer (4 neurons, leaky ReLU), backpropagation with MSE loss, He‑like initialization (Box‑Muller), and a tanh output.
  • Compilation requires the DEC FORTRAN IV compiler (1974); execution needs at least 32 KB of memory and an FP11 floating-point processor.
  • Training the network's 17 parameters takes under two minutes on real hardware; setting SIMH's throttle to 500K approximates that speed.
  • Output logs loss every 100 epochs and final predictions, showing convergence to the expected XOR targets.

Hottest takes

“a total waste of time” — jacobgorm
“Silly me.” — jacobgorm
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.