December 10, 2025
Flat earth, but for code?
Super-Flat ASTs
Flat code trees for speed—fans cheer, skeptics ask “but at what cost”
TLDR: A developer flattened their code structures and reused names behind the scenes to boost speed and cut memory. Commenters clash: some say the parser's gains aren't worth the worse debugging experience, others cite Zig and Clang as proof the approach works, and confusion over inconsistent chart colors and scales fueled the debate.
A dev flattened their “code tree” and started reusing names behind the scenes to make parsing faster and lighter on memory—think less clutter, more zoom. The community immediately split into camps. The performance crowd cheered the simple switch to string interning (reusing repeated names) and “super-flat” layouts, pointing to real-world comps like Zig’s compiler. One commenter even claimed the new design runs about twice as fast. Cue victory confetti.
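The post's own code isn't shown here, so as a rough illustration, here is a minimal Rust sketch of string interning as described above: repeated identifier strings are stored once and the AST keeps a small numeric ID instead of an owned `String`. The `Interner` type and its methods are hypothetical, not the author's actual API.

```rust
use std::collections::HashMap;

/// Hypothetical interner: maps each distinct identifier string to a small
/// numeric ID, so AST nodes can store a u32 instead of an owned String.
#[derive(Default)]
struct Interner {
    map: HashMap<String, u32>,
    names: Vec<String>,
}

impl Interner {
    /// Returns the ID for `name`, allocating only the first time it appears.
    fn intern(&mut self, name: &str) -> u32 {
        if let Some(&id) = self.map.get(name) {
            return id; // repeated name: no new allocation
        }
        let id = self.names.len() as u32;
        self.names.push(name.to_owned());
        self.map.insert(name.to_owned(), id);
        id
    }

    /// Looks the original string back up by its ID.
    fn resolve(&self, id: u32) -> &str {
        &self.names[id as usize]
    }
}

fn main() {
    let mut interner = Interner::default();
    let a = interner.intern("counter");
    let b = interner.intern("counter"); // same string, same ID
    let c = interner.intern("total");
    assert_eq!(a, b);
    assert_ne!(a, c);
    assert_eq!(interner.resolve(c), "total");
    println!("ok");
}
```

The payoff is twofold: identifiers in the AST become fixed-size integers (cheaper to copy and compare), and a name that appears thousands of times is allocated once instead of thousands of times.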
But the vibe wasn’t all high-fives. The skeptics argued that flat layouts trade away the stuff humans need: easy debugging, quick “print this node and its children,” and flexible tools. One veteran voice said they’d “take the performance hit” because parsing isn’t the slow part of most compilers anyway. Another chimed in that Clang already hides flat storage behind friendly APIs—so yes, speed, but don’t torture developers. Meanwhile, meta-drama erupted when someone noticed the graph colors and scales changed between charts, spawning jokes like “speed gains: now you see it, now you don’t.” In short: Team Speed vs. Team Sanity, with bonus memes about “flat earth, but for code.”
Key Points
- The language “simp” uses a recursive descent parser that builds an AST for compilation passes.
- The existing AST stores recursive nodes and sequences using heap allocations, leading to high memory usage.
- Benchmarking focuses on throughput (lines of code per second) and maximum memory usage across inputs up to 100 MB.
- Throughput is plotted against log10(input size); memory usage scales approximately linearly with input size.
- String interning replaces owned Strings with identifier IDs to amortize allocations and improve parsing efficiency.
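To make the "super-flat" idea concrete, here is a minimal Rust sketch (not the author's code) of an index-based AST: every node lives in one contiguous `Vec`, and child links are `u32` indices into that vector rather than `Box` pointers scattered across the heap. The `Node` and `Ast` types are illustrative assumptions.

```rust
/// Hypothetical flat AST: all nodes live in a single Vec, and children are
/// referenced by u32 indices instead of heap-allocated Box pointers.
#[derive(Debug)]
enum Node {
    Num(i64),
    Add { lhs: u32, rhs: u32 }, // indices of child nodes in Ast::nodes
}

#[derive(Default)]
struct Ast {
    nodes: Vec<Node>,
}

impl Ast {
    /// Appends a node and returns its index, which acts as its "pointer".
    fn push(&mut self, node: Node) -> u32 {
        self.nodes.push(node);
        (self.nodes.len() - 1) as u32
    }

    /// Evaluates a node by index, chasing indices instead of pointers.
    fn eval(&self, id: u32) -> i64 {
        match &self.nodes[id as usize] {
            Node::Num(n) => *n,
            Node::Add { lhs, rhs } => self.eval(*lhs) + self.eval(*rhs),
        }
    }
}

fn main() {
    // Build 1 + (2 + 3): children are pushed first, so their indices exist
    // by the time the parent node refers to them.
    let mut ast = Ast::default();
    let one = ast.push(Node::Num(1));
    let two = ast.push(Node::Num(2));
    let three = ast.push(Node::Num(3));
    let inner = ast.push(Node::Add { lhs: two, rhs: three });
    let root = ast.push(Node::Add { lhs: one, rhs: inner });
    assert_eq!(ast.eval(root), 6);
    println!("eval = {}", ast.eval(root));
}
```

This layout is what drives the debate summarized above: one allocation for the whole tree and cache-friendly traversal on one side, versus losing the ability to casually print or mutate a node and its children without going through the arena on the other.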