Ars Technica: Our newsroom AI policy

Ars says 'humans write the news'—readers clap, side‑eye, and meme

TLDR: Ars Technica unveiled a policy stating that humans write the news, with AI limited to supervised, minor assistance. Commenters split between readers applauding a much-needed trust boost and skeptics blasting "AI slop," warning about polluted content wells, and pointing to a recent retraction as proof that accountability matters.

Ars Technica just posted its newsroom AI rules, promising that stories are written by people, with AI only allowed for light editing and research—and always checked by humans. The policy also plants a big flag on accountability: if a writer uses a tool, the human is still responsible. Cue the comment section: part standing ovation, part courtroom cross‑examination.

Fans of the move are calling it a “trust premium,” with one reader flatly saying credibility is the new currency. Skeptics rolled in with spicy metaphors: one top‑voted zinger warns AI is “peeing in its own water source,” arguing the web needs human‑made work to survive—and readers need a reason to create it. Others brought receipts, linking to Crikey’s blunt stance—“AI‑generated news is unhuman slop”—and past retractions, reminding everyone what’s at stake when machines mangle quotes.

There’s also some inside‑baseball drama: commenters winked at an earlier Ars hiccup where a staffer reportedly leaned on AI for quotes, leading to a retraction—exactly the kind of mess this policy’s “humans are responsible” line seems to address. And, in true internet fashion, one nitpicker even scolded the headline formatting. Verdict from the crowd: strong policy, sharper memes, and a very loud reminder that trust beats clicks.

Key Points

  • Ars Technica published a reader-facing policy on the use of generative AI in its editorial workflow.
  • All reporting, analysis, and commentary are human-authored; AI does not write stories or generate images, audio, or video content.
  • AI tools may assist with editing tasks (grammar, style, structure) under editorial standards and human oversight.
  • When AI outputs are reported on, they are disclosed and visually set apart as exemplar material.
  • AI-assisted research is allowed with vetted tools, but AI is not treated as an authoritative source: all information must be verified, and attribution to named sources comes only from direct engagement.

Hottest takes

"AI is in danger of peeing in it's own water source" — legitster
"AI-generated news is unhuman slop. Crikey is banning it" — defrost
"Trust, reputation, and credibility will become (even more of) a premium." — ares623