Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?

HN wants human replies; critics say banning “my bot said” posts will only hide them

TLDR: Hacker News is debating whether to curb “I asked AI and it said” replies. Most commenters want human opinions, but many warn that a ban would just push AI use underground, arguing that community downvotes and existing norms beat new rules. It matters because AI-shaped comments are reshaping online discourse.

The orange site is in full gossip mode over a spicy Ask HN: should “I asked AI and it said…” replies get the boot? Fans of human-to-human chat say bot quotes are the new LMGTFY meme—lazy, noisy, and not why they open Hacker News. One user basically called them “worthless wastes of space,” while others said they join HN to read people, not parrots. The anti-bot camp dropped peak exasperation: If you want to know what an AI thinks, go ask it.

But the plot twist? Even the haters worry a ban would backfire. As one user deadpanned, people are “probably copy pasting already without that disclosure,” so forbidding it would just push the botting underground. Another zinger nailed the paradox: a ban “wouldn’t ban the behavior, just the disclosure.” Cue the confession-cam vibes. Meanwhile, the old-school HN ethos showed up: no need for a new rule—downvote and move on.

The original post links several example threads full of these bot-assisted replies: walls of AI text pasted wholesale into the comments. The drama is delicious: update the guidelines to block bot monologues, or let the community self-police? Either way, the crowd’s loudest chorus is clear: less bot ventriloquism, more human voices.

Key Points

  • The post proposes revisiting Hacker News guidelines regarding comments that quote LLM outputs.
  • It notes the rise of comments stating “I asked [LLM], and it said…,” citing examples via HN item links.
  • The author prefers human conversation and finds lengthy AI-generated text disruptive to reading human input.
  • Policy options include allowing such replies, discouraging critique of them, or adding a guideline to avoid copy-pasting large LLM outputs.
  • The post invites community input on whether and how to update HN guidelines to address AI-quoted comments.

Hottest takes

“If I wanted to know what an AI thinks I’ll ask it. I’m here because I want to know what other people think.” — gortok
“People are probably copy pasting already without that disclosure :(” — shishy
“This wouldn't ban the behavior, just the disclosure of it.” — chemotaxis
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.