May 3, 2026

Random? The comments say otherwise

Learning Pseudorandom Numbers with Transformers

AI learns to guess “random” numbers, and commenters are already yelling “security nightmare”

TLDR: Researchers showed that an AI can learn to predict number sequences designed to look random, which matters because those generators are used throughout computing. The comment section instantly went into full panic mode, with readers framing the result as a possible security nightmare.

A fresh research paper just dropped a very unsettling party trick: Transformer AI models can learn to predict numbers that are supposed to look random. These number streams come from tools commonly used inside software systems to generate unpredictability, and the researchers say the model kept getting better even when the clues were tiny — in one case, down to a single bit, basically the smallest yes/no-style crumb of information possible. They also found the AI could learn several different random-number systems at once and improve with a training “curriculum,” which is science-speak for start easy, then get scary.
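To make the “single bit” setting concrete: the sequences in question come from generators with hidden internal state, and the model only ever sees a sliver of that state. Here's a minimal sketch (illustrative only, not the paper's setup) of a linear congruential generator whose output is truncated to one bit per step; the constants are common textbook choices, not the ones used in the study:

```python
def lcg_bits(seed, a=1103515245, c=12345, m=2**31, n=16):
    """Emit n single-bit outputs from a linear congruential generator.

    The hidden state evolves as state = (a * state + c) % m, but an
    observer sees only the top bit of each state -- the tiny
    yes/no-style crumb of information the article describes.
    """
    state = seed
    bits = []
    for _ in range(n):
        state = (a * state + c) % m
        bits.append(state >> 30)  # keep only the most significant bit
    return bits

print(lcg_bits(42))
```

The point of the result is that even this heavily truncated stream still leaks enough structure for a Transformer to pick up on.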

But the real fireworks came from the peanut gallery. The loudest reaction was immediate, blunt, and gloriously alarmist: “Uh this is apocalyptic for computer security, no?” That one line basically sets the mood for the entire discussion. The vibe is equal parts fascinated and horrified, like watching someone teach a parrot to pick locks. Even without a huge comment war in the excerpt, you can already feel the internet reaching for the big red panic button: if AI can spot patterns in stuff humans treat as random, people instantly jump to passwords, encryption, and digital chaos. It’s the classic tech-drama recipe: one research paper, one chilling takeaway, and one comment that turns the whole thing into a disaster movie trailer. Is this a niche math result, or the opening scene of a security meltdown? The community, so far, is very much voting for maximum dread.

Key Points

  • The article reports that Transformer models can perform in-context prediction on unseen sequences from diverse Permuted Congruential Generator variants.
  • PCGs are presented as harder than linear congruential generators because they apply bit-wise shifts, XORs, rotations, and truncations to hidden state.
  • The experiments scale to models with up to one million parameters and datasets with up to one billion tokens.
  • The study finds that prediction remains reliable even when generator output is truncated to a single bit, and that multiple distinct PRNGs can be learned jointly during training.
  • For larger moduli, optimization shows long stagnation phases, and the reported experiments indicate that curriculum learning from smaller to larger moduli is necessary.
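The operations named in the second point above (shifts, XORs, rotations, truncation of hidden state) can be sketched as follows. This is a simplified Python rendering in the spirit of the well-known PCG-XSH-RR 64/32 variant, offered for illustration; the paper's exact generator variants and parameters may differ:

```python
def pcg32_step(state,
               multiplier=6364136223846793005,
               increment=1442695040888963407):
    """One step of a PCG-style generator.

    An LCG advances 64-bit hidden state; then a shift, an XOR, and a
    data-dependent rotation scramble the output, which is truncated
    to 32 bits -- the layering that makes PCGs harder to predict
    than a plain LCG.
    """
    mask64 = (1 << 64) - 1
    state = (state * multiplier + increment) & mask64   # LCG state update
    xorshifted = (((state >> 18) ^ state) >> 27) & 0xFFFFFFFF  # shift + XOR + truncate
    rot = state >> 59                                   # top 5 bits pick the rotation
    output = ((xorshifted >> rot) | (xorshifted << (32 - rot))) & 0xFFFFFFFF  # rotate right
    return state, output

state = 0x853C49E6748FEA9B  # arbitrary example seed
for _ in range(4):
    state, out = pcg32_step(state)
    print(f"{out:08x}")
```

Seeing the output permutation spelled out makes the headline claim easier to appreciate: the network never observes the 64-bit state, only the scrambled 32-bit (or single-bit) residue of it.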

Hottest takes

"apocalyptic for computer security" — dTal
"Uh this is apocalyptic" — dTal
"no?" — dTal
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.