LLMs may be standardizing human expression – and subtly influencing how we think

Workers say bosses sound like bots; some warn of a “second dark age”

TLDR: USC researchers warn chatbots are making our writing and thinking more uniform and urge more diverse training data. Commenters split between “my boss already talks like a bot,” “this is just old corporate blandness,” and “brace for a secretive dark age”—with a grammar snob heckling the headline.

USC researchers just dropped a spicy warning: AI chatbots are making us all sound the same—and maybe nudging how we think. They argue that large language models (the tech behind chatbots) flatten our quirks and prefer neat, step‑by‑step answers, and they want developers to train on more diverse voices to keep human creativity alive.

The comments lit up. One worker went full whistleblower: “Subtly? My team lead literally talks to me through his bot,” claiming his boss’s “thoughts” aren’t his own. A market believer pushed back, saying corporate dronespeak existed long before AI, and that distinctive voices will win in competition. Then the doomsayers arrived: one ominously predicted a “second dark age,” where experts keep tricks secret so bots can’t copy them. Others nitpicked the science: “It’s not explanation—it’s relabeling,” grumbled a skeptic, dunking on chatbots’ so‑called reasoning. And because it’s the internet, a resident pedant roasted the headline for using the wrong dash. Drama aside, many worry that even non‑users will conform if everyone around them speaks “bot” because it sounds more credible. The big question hanging over the thread: are we crafting our own ideas—or just hitting accept on the machine’s “good enough” suggestion?

Key Points

  • USC researchers published an opinion paper on March 11 in Trends in Cognitive Sciences warning that LLMs may homogenize language, thought, and reasoning.
  • The authors argue cognitive diversity is shrinking as widespread chatbot use standardizes expression and influences what is seen as credible or correct.
  • Cited studies suggest LLM outputs are less varied than human writing and reflect WEIRD (Western, educated, industrialized, rich, democratic) cultural biases due to training data composition.
  • Group creativity may suffer with LLM use, and interacting with biased models can shift users’ opinions toward the model’s stance.
  • The team urges AI developers to intentionally diversify training data to preserve cognitive diversity and improve model reasoning.

Hottest takes

"My team leader only communicates to me using his LLM" — misterflibble
"Corporate dronespeak is no less homogeneous than AI writing" — adriand
"presages the advent of a second dark age" — rdevilla