FORTH? Really!?

Old-school stack talk beats AI puzzles; veterans cheer, skeptics sigh

TLDR: A benchmark finds that bottom‑up “postfix” beats top‑down “prefix” when chatbots build simple even/odd trees, with “thinking mode” and the bigger model winning out. The comments split between veterans saying “we knew this decades ago,” excited tinkerers sharing projects, and jokers memeing the return of Forth‑style thinking. If the result holds up, it could point toward faster AI.

The post claims today’s chatbots may think more like Forth—an old, stack-based way of writing code—than like the top‑down styles we’re used to. To test it, the author ran a quirky benchmark: build “parity trees” (basically, check whether groups of numbers sum to even or odd) in two styles. When bots had to state the answer first and fill in details later (prefix), they struggled; when they built from the bottom up (postfix), they crushed it. Bonus drama: a “thinking mode” made models smarter, and the bigger model (Opus) outperformed the smaller one (Haiku). Cue the victory horns for the stack lovers.
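The article’s exact task format isn’t reproduced here, so this is a minimal sketch of what a parity-tree task might look like, assuming leaves are numbers, each internal node is labeled with the parity of its subtree’s sum, and the function names (`build`, `serialize`) are ours, not the author’s:

```python
# Hypothetical parity tree: leaves are numbers, each internal node is
# labeled with the parity (EVEN/ODD) of the sum of its subtree. The
# same tree is then written out in prefix order (answer first) and
# postfix order (children first).

def parity(n):
    return "EVEN" if n % 2 == 0 else "ODD"

def build(nums):
    """Recursively split nums in half; return (label, children)."""
    if len(nums) == 1:
        return (str(nums[0]), [])
    mid = len(nums) // 2
    return (parity(sum(nums)), [build(nums[:mid]), build(nums[mid:])])

def serialize(node, order):
    label, children = node
    parts = [serialize(c, order) for c in children]
    # prefix: label before subtrees; postfix: label after subtrees
    return " ".join([label] + parts if order == "prefix" else parts + [label])

tree = build([3, 1, 4, 1, 5, 9])
print(serialize(tree, "prefix"))   # ODD EVEN 3 ODD 1 4 ODD 1 EVEN 5 9
print(serialize(tree, "postfix"))  # 3 1 4 ODD EVEN 1 5 9 EVEN ODD ODD
```

Note how the prefix form commits to the root label “ODD” before any leaf is seen, while the postfix form only emits a label once all of its inputs are already on the page—the asymmetry the benchmark is probing.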

The comments lit up. rescrv kicked off a debate asking if bots would work better with postfix‑style languages. jandrewrogers stormed in with a veteran flex: this is old news, saying concatenative (stacky) languages are “nearly ideal” for machine learning on chips. d3nit cracked up that it wasn’t yet another “Forth Fibonacci” demo. codr7 dropped their own project and the line “Forth is a good starting point imo,” linking to shik. Then haolez tossed a curveball—“Diffusion text models to the rescue! :)”—like a meme grenade. The split is clear: enthusiasts see a path to faster, cheaper AI; skeptics see a 1970s rerun. Memes about “The Stack Strikes Back” and the “postfix cult” are already flying, with jokers telling you to read this sentence backwards for better accuracy.

Key Points

  • The article argues concatenative/stack-oriented languages (e.g., Forth) may align better with transformer-based LLMs than top-down recursive approaches.
  • A database join example shows a local rewrite (BUILD PROBE → DUP STATS SWAP BUILD [PUSHDOWN] DIP PROBE) enabling sideways-information-passing joins in an associative language.
  • The author proposes using finite-automata transformations over text subsequences to create database-layer optimization passes.
  • A benchmark constructs parity trees over number sequences to test whether order (prefix vs. postfix) affects transformer performance.
  • Results across Opus and Haiku show thinking > non-thinking, Opus > Haiku, and postfix > prefix. Accuracies (postfix/prefix): Haiku (thinking) 88.3%/36.7%, Haiku (no thinking) 6.7%/4.3%, Opus (thinking) 98.3%/81.3%, Opus (no thinking) 50.0%/9.7%.
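One intuition behind the postfix advantage is that a postfix stream can be checked left-to-right with a single stack and no lookahead, which is exactly the local, step-by-step processing a token-at-a-time model does. A minimal sketch, assuming the same hypothetical parity-of-sum labeling as above (`check_postfix` is our name, not the article’s):

```python
# Verify a postfix parity stream with one stack, one token at a time.
# Numbers push their parity; an EVEN/ODD token pops two subtree
# parities, checks the claimed label, and pushes it back.

def check_postfix(tokens):
    stack = []
    for tok in tokens:
        if tok in ("EVEN", "ODD"):
            b, a = stack.pop(), stack.pop()
            # Sum of two subtrees is EVEN iff their parities match.
            combined = "EVEN" if (a == "EVEN") == (b == "EVEN") else "ODD"
            if combined != tok:
                return False  # label contradicts its children
            stack.append(tok)
        else:
            stack.append("EVEN" if int(tok) % 2 == 0 else "ODD")
    return len(stack) == 1  # exactly one tree consumed

print(check_postfix("3 4 ODD 1 5 EVEN ODD".split()))  # True
```

A prefix stream has no such single-pass check: every label is a promise about tokens that haven’t arrived yet, so verifying it requires recursion or backtracking—one plausible reading of why prefix accuracy collapses hardest in the no-thinking rows above.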

Hottest takes

"whether LLMs would do better if the language had properties similar to postfix-notation." — rescrv
"nearly ideal properties for efficient universal learning on silicon" — jandrewrogers
"I was happy to see someone actually using it for anything besides proving their interpreter with a Fibonacci calculation." — d3nit

Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.