The Only Moat Left Is Knowing Things

AI writes fast, but readers want real scars: comments yell “AI slop” and “LinkedIn fluff”

TL;DR: The author argues that AI made writing easy, so only lived experience and visible effort still stand out. The comments clap back, branding the piece “AI slop” and “LinkedIn fluff,” while others counter that the real edge is correct contrarian insight, not performative difficulty, in a world drowning in copy-paste content.

A marketer claims the last real advantage isn’t fancy tools but knowing things other people don’t, and proving it with hard-earned experience. They say AI can spit out clean copy in seconds, so the new “moat” is personal data, real failures, and effort readers can feel. Stats get dropped (hello, Originality.ai: roughly half of LinkedIn posts are likely AI-written), and a bold test is proposed: if a chatbot with Google access could write it from one prompt, it won’t make you memorable. Cue the community fireworks.

The top vibe? Roast mode. One commenter called it “AI slop,” another labeled it “LinkedIn fluff,” and several rolled their eyes at the crypto-ish “Proof of Work” energy. A skeptic warned that readers can’t reliably spot machine-written text anyway, so how are we supposed to judge “effort”? Meanwhile, a contrarian camp argued the real edge is correct contrarian thinking, not performative difficulty. And one writer dunked on the “hard equals good” idea, saying their best insights often flowed effortlessly, because the hard work happened before fingers hit the keyboard. Memes flew about “12 seconds to write, 5 minutes to regret reading,” and the author’s “Bookmark Game” got turned into a drinking game. The moat isn’t just knowledge; it’s surviving the comments.

Key Points

  • The author contends that content production is no longer a competitive advantage due to widespread access to AI and SEO tools.
  • Citing Originality.ai, the article states 54% of LinkedIn posts and 15% of Reddit posts are likely AI-written, with Reddit up 146% since 2021.
  • Differentiation now depends on unique inputs: proprietary data, lived experiences, and observations not present in training data.
  • An authenticity test is proposed: if an LLM with Google could generate the piece via a single prompt, it’s coverage, not differentiation.
  • The article recommends showing “proof of work” (custom visuals, interactivity, synthesis) and leveraging internal data, documented failures, and evidence-backed opinions.

Hottest takes

"Ironically this reads like AI slop." — jdthedisciple
"Big LinkedIn post on a concept with little proof." — _tk_
"Probably my best and most insightful stuff has been produced more or less effortlessly" — bschne

Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.