The Future of Everything Is Lies, I Guess: Work

Witchcraft coding? Hacker News bickers over CEOs, fonts, and a UK block

TLDR: A viral essay says AI could turn coding into spell-casting and shift power to big tech. The comments explode over censorship in the UK, a mobile font meltdown, and a debate on whether CEOs deserve blame—turning a serious warning about work into a full-on culture clash.

A fiery new essay warns coding might turn into witchcraft—as in, people chanting prompts to AI and hoping the summoned bot spits out working code. The author says “AI coworkers” are overhyped, automation can break systems, and Big Tech could hoard even more power. He’s talking about LLMs (large language models, the text bots behind ChatGPT and friends) and worries that natural-language programming won’t guarantee correctness like traditional compilers do. Spooky vibes, but with real stakes.

And the crowd? Pure drama. One UK reader hit a brick wall: “Unavailable Due to the UK Online Safety Act.” Cue irony klaxon: a piece about truth and lies, blocked outright by policy. Another thread devolved into font fury, with a mobile reader blasting the “obnoxious” typography—because nothing says future-of-work panic like three-word line breaks. Meanwhile, a commenter pushed back on the “evil CEO” narrative, saying stop mythologizing bosses and try building a better company yourself. Others rolled their eyes at yet another re-post, asking why this has camped on the front page all week and linking to past threads.

Between the witches vs. engineers imagery and debates over power, pay, and who’s actually in charge, the real magic trick today was turning a tech thinkpiece into a culture war—and a font fight.

Key Points

  • The article critiques the hype around “AI coworkers,” arguing automation can introduce risks like deskilling, bias, monitoring fatigue, and takeover hazards.
  • It notes recent rapid improvements in LLMs, with reports of successful code generation (e.g., implementing cryptography papers) and some companies relying heavily on LLM-produced code.
  • The author contrasts compilers’ semantic guarantees with LLM-driven natural language instructions, arguing LLMs lack reliable semantic preservation.
  • Small prompt variations can lead to materially different program behaviors, so human review remains necessary where correctness is critical.
  • The piece warns machine learning could accelerate labor displacement and further concentrate wealth in large tech firms, making the transition difficult.
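The compiler-vs-prompt contrast above can be made concrete with a toy sketch (hypothetical, not from the article): a compiler gives one source text one meaning, but a natural-language spec like “take up to n” is ambiguous, so two near-identical prompts can each justify a different, internally consistent program.

```python
# Hypothetical sketch of prompt ambiguity: two plausible readings of
# almost the same natural-language instruction produce different
# boundary behavior. Function names and prompts are illustrative only.

def take_up_to_index_n(items, n):
    # Reading "items up to index n" as inclusive of index n.
    return items[:n + 1]

def take_first_n(items, n):
    # Reading "the first n items" as exactly n items.
    return items[:n]

data = [10, 20, 30, 40]
print(take_up_to_index_n(data, 2))  # → [10, 20, 30]
print(take_first_n(data, 2))        # → [10, 20]
```

Both functions are “correct” for their reading of the spec, which is why the essay argues human review stays necessary wherever the boundary actually matters.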

Hottest takes

"Unavailable Due to the UK Online Safety Act" — hoppp
"Wow the typography is obnoxious on mobile" — mock-possum
"You should try to start a company and see if you can be one of the better ones" — greatpost
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.