Why LLM-Generated Passwords Are Dangerously Insecure

AI passwords look tough, crack easy — commenters scream 'DUH' and 'who's using these?!'

TLDR: Researchers warn AI-made passwords look complex but aren’t truly random, and real users and coding agents are using them anyway. Commenters split between “obvious” eye-rolls, memes, and panic about sneaky agent behavior—united by one takeaway: use a real password manager, not a chatbot.

Security researchers say AI-made passwords look strong but are secretly flimsy because chatbots “guess the next word” instead of using real randomness. Translation: those mystery strings the bots spit out can follow patterns, repeat, and be way easier to predict than they appear. And yes, people and coding bots are actually using them.

The comments lit up like a breached server. One user joked their AI keeps serving “<username>123!” and even “changeme,” which is basically leaving your front door open. Another dropped the classic xkcd meme, while the thread debated whether the real punchline is users trusting vibes over math. Skeptics rolled their eyes: “Humans make bad passwords too—this is why we use real random generators and password managers.” Alarmists were stunned that coding agents may quietly slip these bot-born passwords into projects without anyone noticing. And then there’s the entrepreneurial chaos goblin wondering if there’s “gas left in the griftmobile” to sell “secure password” services to AI agents.

In plain English: password managers use strong, unpredictable randomness; chatbots predict what comes next. One is a safe lock; the other is a catchy guess. The community mood? Half “How is this news?” and half “Yikes, it’s happening anyway.” Either way, the roast is unanimous: don’t let a chatbot be your locksmith.
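For the curious, here is a minimal sketch of what “real randomness” means in practice, using Python’s standard `secrets` module (the function name and 20-character default are illustrative, not from the article):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Draw each character independently from an OS CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice pulls from the operating system's cryptographically
    # secure randomness source -- every character is uniform and
    # independent, with no "predict the next token" step anywhere.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The design point is that each character carries the same entropy regardless of what came before it, which is exactly the property next-token prediction lacks.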

Key Points

  • LLM-generated passwords are fundamentally insecure because LLM token sampling is predictive and non-uniform, unlike CSPRNG-based generation.
  • Real-world use of LLM-generated passwords is occurring among users and within code produced by coding agents.
  • Testing of GPT, Claude (including Claude Opus 4.6), and Gemini found predictable patterns, repeated passwords, and weaker-than-appearing outputs.
  • Secure password generation requires careful use of CSPRNGs, proper entropy, and correct mapping to character sets.
  • The authors recommend users avoid LLM-generated passwords, developers configure agents to use secure generators, and AI labs set secure defaults.
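To put “proper entropy” into numbers: if every character is drawn uniformly from a charset, entropy grows linearly with length. A quick back-of-the-envelope calculation (the helper function below is illustrative, not from the article):

```python
import math

def password_entropy_bits(charset_size: int, length: int) -> float:
    # Each uniformly chosen character contributes log2(charset_size)
    # bits, so total entropy is simply length * log2(charset_size).
    return length * math.log2(charset_size)

# A 16-character password over the 94 printable ASCII characters:
print(round(password_entropy_bits(94, 16), 1))  # ~104.9 bits
```

An LLM-generated string of the same length can look identical but carry far fewer effective bits, because its characters follow learned patterns rather than a uniform distribution.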

Hottest takes

"for me it just generates <username>123 … sometimes adds a !" — himata4113
"Is there still gas left in the griftmobile… my slice of the pie?" — camgunz
"why on earth anybody would need to be told this" — Mordisquitos
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.