May 4, 2026

Encrypted gossip for robot brains

Why are neural networks and cryptographic ciphers so similar?

Turns out chatbots and secret codes may be weird cousins, and the comments are losing it

TLDR: The article says modern AI text systems and encryption tools often use surprisingly similar building blocks because they face similar design pressures. Commenters split between **"same hardware, same shapes"** and **"not so fast — one hides patterns, the other needs them,"** with bonus crab-evolution jokes stealing the show.

A brain-bending post arguing that language AIs and secret-code systems are built in surprisingly similar ways sent the comment section straight into "wait... what?!" mode. The article’s basic claim is simple enough for non-specialists: even though one system writes text and the other hides it, both often process information in chunks, mix it up repeatedly, and rely on the same broad trick — blend things together, scramble them, repeat. That was enough to spark a mini civil war between the "of course they look alike" crowd and the "absolutely not, you’re flattening huge differences" camp.

One of the biggest reactions came from readers saying the real reason is boring but powerful: hardware wins. As one commenter put it, both fields are shaped by what computer chips are good at, so naturally they end up looking similar. Another reader dropped the day’s funniest metaphor by comparing the whole thing to carcinization — the bizarre way unrelated animals keep evolving into crab-like shapes. Yes, the thread really went: AI and encryption are becoming crabs now.

But not everyone was buying the family-resemblance story. One sharp dissent argued the two systems actually want opposite things: ciphers try to make patterns impossible to detect, while machine learning depends on patterns being there in the first place. Meanwhile, another commenter zoomed out and got philosophical, name-dropping Shannon and Turing and reminding everyone that both AI and cryptography were born from the same old obsession: information itself. Also sneaking through the drama was a practical subplot: one reader just wanted good cryptography study material, which somehow felt like the most relatable comment of all.

Key Points

  • The article argues that neural networks and cryptographic ciphers often converge on similar algorithmic structures despite serving different purposes.
  • It compares recurrent neural network sequence absorption with the sponge construction used in SHA-3, since both fold variable-length input into a fixed-size state (first sketch after this list).
  • It says both fields adopted parallel sequence-processing methods that combine chunk outputs and recover order using position encodings (second sketch below).
  • It identifies repeated linear and nonlinear layers as a shared primitive in both neural networks and symmetric cryptography.
  • It describes a common efficiency pattern of alternating mixing across different state dimensions, using examples from Transformers, AES, and ChaCha20 (third sketch below).
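
For the curious, here is a minimal toy sketch (our illustration, not the article's code) of that first parallel: an RNN folds each chunk into a fixed-size hidden state with a learned linear mix plus a nonlinearity, while a sponge XORs each chunk into part of a fixed-size state and scrambles it with a permutation. Everything here is simplified; `toy_perm` is a stand-in for SHA-3's Keccak-f permutation, and the weights are random.

```python
import numpy as np

def rnn_absorb(chunks, state, W_state, W_input):
    """Toy RNN: fold each input chunk into a fixed-size hidden state."""
    for x in chunks:
        # linear mix of old state and new chunk, then a nonlinearity
        state = np.tanh(W_state @ state + W_input @ x)
    return state

def sponge_absorb(chunks, state, permute, rate=4):
    """Toy sponge: XOR each chunk into the first `rate` bytes of the state,
    then scramble everything. Real SHA-3 uses the Keccak-f permutation."""
    for x in chunks:
        state[:rate] ^= x
        state = permute(state)
    return state

# Usage: both reduce a variable-length sequence to one fixed-size state.
rng = np.random.default_rng(0)
h = rnn_absorb([rng.normal(size=3) for _ in range(5)], np.zeros(8),
               rng.normal(size=(8, 8)), rng.normal(size=(8, 3)))

toy_perm = lambda s: np.roll(s * np.uint8(7), 1)   # stand-in mixer, NOT Keccak-f
d = sponge_absorb([np.frombuffer(b"abcd", dtype=np.uint8).copy()] * 3,
                  np.zeros(16, dtype=np.uint8), toy_perm, rate=4)
```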
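
The order-recovery point can be sketched the same way. Transformers add a position encoding to each chunk so a parallel, order-blind mixer can still tell positions apart; counter-based stream ciphers like ChaCha20 get a similar effect by including a block counter in each block's input, so keystream blocks can be computed in parallel yet still differ by position. `toy_block_state` below is a hypothetical simplification of that idea, not ChaCha20's real state layout.

```python
import numpy as np

def position_encoding(num_positions, dim):
    """Transformer-style sinusoidal encoding, added to chunk embeddings so a
    parallel, order-blind mixer can still tell position 3 from position 7."""
    pos = np.arange(num_positions)[:, None]
    i = np.arange(dim // 2)[None, :]
    angles = pos / (10000.0 ** (2 * i / dim))
    enc = np.zeros((num_positions, dim))
    enc[:, 0::2] = np.sin(angles)      # even channels: sine
    enc[:, 1::2] = np.cos(angles)      # odd channels: cosine
    return enc

def toy_block_state(key, nonce, counter):
    """Hypothetical, simplified analogue of a counter-based cipher state:
    the block counter plays the role of the position encoding, so block 3
    and block 7 of the keystream differ even if computed in parallel."""
    return key + nonce + counter.to_bytes(4, "little")

rng = np.random.default_rng(0)
chunks = rng.normal(size=(10, 16)) + position_encoding(10, 16)  # 10 chunks, dim 16
state_3 = toy_block_state(b"k" * 32, b"n" * 12, 3)
```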
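
Finally, the shared round structure. A sketch, again under toy assumptions: both loops alternate a linear mix with a nonlinearity and switch which dimension of the state gets mixed each round (tokens then channels for the Transformer-like block; rows then columns for the AES-flavoured one). Real attention, SubBytes, ShiftRows, and MixColumns are of course much more involved.

```python
import numpy as np

def transformer_ish_block(X, W_tok, W_chan):
    """Toy round: linearly mix across tokens (rows), apply a nonlinearity,
    then mix across channels (columns). Real Transformers use attention
    for the token mix and an MLP for the channel mix."""
    X = W_tok @ X            # mix information across the token dimension
    X = np.maximum(X, 0.0)   # nonlinearity (ReLU)
    return X @ W_chan        # mix information across the channel dimension

def cipher_ish_round(state, sbox):
    """Toy cipher round on a 4x4 byte grid: nonlinear substitution, then
    linear mixing along rows and along columns (AES-flavoured, simplified)."""
    state = sbox[state]                                              # SubBytes-like
    state = np.stack([np.roll(r, -i) for i, r in enumerate(state)])  # shift rows
    return state + np.roll(state, 1, axis=0)                         # mix down columns

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 8))                    # 6 tokens, 8 channels
W_tok, W_chan = rng.normal(size=(6, 6)), rng.normal(size=(8, 8))
sbox = rng.permutation(256).astype(np.uint8)   # random toy S-box, not AES's
S = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
for _ in range(4):                             # depth comes from repeating rounds
    X = transformer_ish_block(X, W_tok, W_chan)
    S = cipher_ish_round(S, sbox)
```

In both cases a single round only mixes locally; the repetition is what spreads information across the whole state.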

Hottest takes

"the evolutionary tendency for different organisms to independently evolve crablike forms" — jdw64
"Because both of them are optimized for hardware" — bux93
"Machine learning is possible because in the absence of perfect mixing" — ajb