April 11, 2026
Maps, caps, and clapbacks
Borges' cartographers and the tacit skill of reading LM output
Philosophy vs. Grammar Police: lowercase sparks a caps-lock war
TLDR: An essay warns that AI chatbots are turning from helpful summaries into a “map” that risks replacing reality, urging readers to stay skeptical. In the comments, one camp blew up over the author’s lowercase style, another defended the rough-edged thinking, and developers debated whether AI’s telltale “smell” will ever disappear.
An essay comparing AI chatbots to maps—and warning that these “maps” might start replacing reality—sent the comments section into a full-on culture clash. The author invokes Borges’ giant, useless map and philosopher Jean Baudrillard to argue that chatbots can be faithful, distorted, hollow, or downright detached from reality. The big idea: we’re using these tools for reading, coding, and thinking, so we need better map-reading muscles before the map becomes the territory. Read the essay here.
But the crowd? Split. One camp fixated on the author’s lowercase style, with the Grammar Police declaring, “no caps, no clicks.” Another camp rallied behind the author’s vibe, pointing to the bio’s promise of “rough edges included” as proof this is thinking out loud—typos and all. Meanwhile, working devs chimed in with real-world grit: AI outputs still have a telltale smell, like suspicious code, and they’re wondering whether that stink will ever fade or whether “smell detectors” are the new must-have skill.
There were jokes, of course: “This map doesn’t have Caps Lock,” and “Borges’ Empire, now available in lowercase.” A few even teased a new stage of reality: Stage Four is when the comments become the story. And today, reader, they did.
Key Points
- The article uses Borges’ map parable to argue that representations are valuable because they compress, and become useless when fidelity overwhelms abstraction.
- It applies Baudrillard’s four-stage model to LMs, showing they can fit different stages simultaneously depending on use.
- LMs can distort by offering averaged, coherent answers that mask ongoing debates, exemplified by explanations of the 2008 financial crisis.
- Reliance on LM outputs may reduce engagement with primary sources (stage three), and a future dominated by model-generated content risks detachment from reality (stage four).
- LM outputs are personalized and malleable, varying with prompts and user backgrounds, which increases the need for skills that maintain a connection to the underlying domain.