Lil' Fun Langs' Guts

Tiny languages, big drama: is 'lazy' code genius or just messy?

TLDR: The post explains how tiny languages run and pits 'lazy' computing against cleaner 'do it now' designs. Commenters brought Raku history and a 'share the goodies' vibe, sparking debate over power features, debugging headaches, and whether laziness is a virtue — a nerdy guide with real-world stakes.

Today’s tiny-language explainer pulled the curtain on how code gets turned into running programs, then dropped the spicy question: is being “lazy” smart or sloppy? Think of “lazy” as waiting to do work until you must, and “strict” as doing it right away. The thread lit up. Minimalists cheered the simple path — easier to debug, fewer moving parts — while power users swooned over infinite lists and clever tricks, calling laziness a superpower. Skeptics fired back: hidden thunks (delayed work) make bugs slippery and stack traces unreadable. Cue the popcorn.
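The lazy-vs-strict split can be sketched in a few lines of Python (a loose illustration, not how any real runtime is built): a thunk is just a zero-argument function whose result is computed on first use and then cached.

```python
# A minimal thunk: wraps a computation, runs it at most once, caches the result.
class Thunk:
    def __init__(self, compute):
        self._compute = compute  # zero-argument function, not yet run
        self._done = False
        self._value = None

    def force(self):
        if not self._done:  # the "update" step: replace the thunk with its value
            self._value = self._compute()
            self._done = True
        return self._value

# Strict: the work happens immediately, even if the result is never used.
def strict_pair(x):
    return (x * x, 42)

# Lazy: the work is delayed until someone forces it.
def lazy_pair(x):
    return (Thunk(lambda: x * x), 42)

first, second = lazy_pair(1000)
print(second)         # 42 -- the square was never computed
print(first.force())  # 1000000 -- computed now, on demand
```

This is also why skeptics complain about debugging: by the time `force` runs, the code that built the thunk is long gone from the call stack.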

History buffs showed up too. “This looks very much fun,” wrote [librasteve], dropping lore that early Raku folks were “Haskell heads,” with PUGS written in Haskell by Audrey Tang, and teasing a self-hosting vibe as Raku’s parser gets rewritten in Raku. That synced with the post’s love for bootstrapping: compilers that build themselves. Meanwhile [esafak] played peacemaker: share this knowledge so other languages can adopt the good bits and “we can all have nice things.” Commenters riffed on the author’s pancreas joke and whether laziness is actually a virtue, while others traded cook-off metaphors over “curried” functions (serving arguments one at a time) versus “bland” ones (all at once). Nerdy? Yes. But the crowd was having fun.

Key Points

  • Haskell-like compilers typically follow phases from lexing and parsing through normalization (A-normal form or K-normal form), closure conversion, code generation (to assembly, bytecode, C, or LLVM), register allocation, and runtime-system setup.
  • Strict and lazy evaluation impose different implementation costs: laziness requires thunks, update frames, and special calling conventions (eval/apply or push/enter), increasing runtime complexity (roughly 500–2,000 lines of C vs. about 200 for a strict runtime).
  • Lazy evaluation enables infinite collections, evaluating elements only when needed; the STG (Spineless Tagless G-machine) is the standard model for implementing laziness.
  • MicroHs avoids the STG machine by compiling to combinatory logic and using graph reduction; its reducer is written in C and the compiler can compile itself, needing only a C compiler to bootstrap.
  • Curried vs. bland function designs affect code generation: curried calls can incur heap allocations without arity analysis, while bland designs (e.g., MinCaml, OCaml internally, Grace, EYG) pass multiple parameters directly.
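The "infinite collections" point has a close analogue in Python generators (an analogy only, not the STG machine): elements exist only when something downstream demands them.

```python
from itertools import count, islice

# An infinite stream of squares; nothing is computed until consumed.
squares = (n * n for n in count(0))

# Demand only the first five elements; the rest are never materialized.
print(list(islice(squares, 5)))  # [0, 1, 4, 9, 16]
```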
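MicroHs's approach — compiling to combinators and running graph reduction — can be caricatured in Python. This toy normal-order reducer handles only the S and K combinators and is nothing like the real C reducer, but it shows the mechanism:

```python
# Toy normal-order reducer for S/K combinator terms.
# A term is 'S', 'K', an int literal, or a tuple (f, a) meaning "apply f to a".
def reduce_term(t):
    while True:
        # Unwind the spine: peel off applications, collecting arguments.
        # spine[-1] ends up as the innermost (first) argument of the head.
        spine = []
        while isinstance(t, tuple):
            t, arg = t
            spine.append(arg)
        if t == 'K' and len(spine) >= 2:
            # K x y -> x  (y is discarded unevaluated: laziness for free)
            t, rest = spine[-1], spine[:-2]
        elif t == 'S' and len(spine) >= 3:
            # S f g x -> (f x) (g x)  (the argument x is shared, as in a graph)
            f, g, x = spine[-1], spine[-2], spine[-3]
            t, rest = ((f, x), (g, x)), spine[:-3]
        else:
            # Head is not a reducible redex: rebuild the term and stop.
            for arg in reversed(spine):
                t = (t, arg)
            return t
        # Re-apply any leftover outer arguments and keep reducing.
        for arg in reversed(rest):
            t = (t, arg)

# I = S K K, so ((S K) K) applied to 7 reduces to 7.
identity = (('S', 'K'), 'K')
print(reduce_term((identity, 7)))  # 7
```

The real point of the combinator route is that the runtime needs no knowledge of the source language: a small reducer plus a bootstrap C compiler is enough, which is how MicroHs keeps its footprint tiny.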
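The curried-vs-bland tradeoff can likewise be mimicked in Python (purely illustrative): a curried function takes one argument per call, allocating a closure at each step — the heap cost that arity analysis tries to eliminate — while a "bland" function takes everything at once.

```python
# "Bland": all parameters at once, one call, no intermediate closures.
def add3(a, b, c):
    return a + b + c

# Curried: one argument per call; each partial application allocates a
# closure on the heap, which is what arity analysis optimizes away.
def add3_curried(a):
    def take_b(b):
        def take_c(c):
            return a + b + c
        return take_c
    return take_b

print(add3(1, 2, 3))          # 6
print(add3_curried(1)(2)(3))  # 6

# The upside of currying: partial application falls out for free.
add_three = add3_curried(1)(2)
print(add_three(3))           # 6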

Hottest takes

"Haskell heads" — librasteve
"rewriting the Raku parser as a Raku Grammar" — librasteve
"so we can all have nice things" — esafak
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.