December 7, 2025
Memory: Blessing or Curse?
Google Titans architecture, helping AI have long-term memory
AI that remembers on the fly has fans cheering and skeptics clutching firewalls
TLDR: Google’s Titans/MIRAS lets AI update its memory in real time using a “surprise” signal, so it keeps important new details. Commenters split between excitement about personal, persistent assistants and fears of long‑lasting prompt injection, with extra praise for Google’s open papers—because memory makes AI smarter, but also stickier.
Google just dropped “Titans” and “MIRAS,” a brainy upgrade that lets AI remember while it’s working, using a built‑in “surprise” meter to decide what’s worth saving. Instead of cramming everything into a tiny note, it updates a deeper long‑term memory—like a second brain—so it can keep the big picture.
Commenters instantly split into camps: the open‑research fans and the “do not imprint my robot” crowd. Alifatisk and okdood64 posted the paper links and blueprint, cheering that Google is publishing its lab research openly. Meanwhile, Mistletoe declared this the missing piece for true AI companions, predicting an era of hyper‑personal assistants—and yes, more AI relationships.
Then the paranoia hit. Jonplackett asked if “learning on the job” makes prompt injection—the trick where sneaky text manipulates models—stick long term. The thread spiraled into “good influence vs bad influence” analogies and jokes about nightly “memory merge” sleep cycles, thanks to nubg’s quip. Everyone riffed on Google’s banana‑peel example, turning it into a meme for “save the weird stuff.”
Key Points
- Titans (architecture) and MIRAS (framework) are introduced to enable real-time parameter updates and long-term memory during inference.
- Transformers’ attention scales poorly with sequence length, limiting very long-context tasks.
- Prior approaches like efficient RNNs and SSMs (e.g., Mamba-2) compress context into fixed-size states, losing rich information.
- Titans uses attention for short-term memory and a neural long-term memory module implemented as an MLP for higher expressive power.
- A surprise metric (derived from the gradient) selectively updates long-term memory with unexpected or novel inputs, avoiding offline retraining.
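To make the last point concrete, here is a minimal sketch of a surprise-driven memory update. It is an illustration of the idea, not the paper's actual equations: a single linear map stands in for Titans' MLP memory, and the "surprise" is the norm of the gradient of an associative recall loss, so novel key–value pairs trigger large updates while familiar ones barely change the memory.

```python
import numpy as np

class SurpriseMemory:
    """Toy long-term memory updated at inference time by a 'surprise' signal.

    A linear map stands in for the MLP memory described in the paper; the
    update rule (gradient of an associative loss, with momentum) follows the
    idea summarized above, not the exact Titans formulation.
    """

    def __init__(self, dim, lr=0.1, momentum=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.01, size=(dim, dim))  # memory parameters
        self.lr = lr
        self.momentum = momentum
        self.vel = np.zeros_like(self.W)  # momentum buffer

    def recall(self, key):
        """Read the memory: predict the value associated with a key."""
        return self.W @ key

    def write(self, key, value):
        """One inference-time update; returns the surprise (gradient norm)."""
        err = self.recall(key) - value   # prediction error for this pair
        grad = np.outer(err, key)        # d/dW of 0.5 * ||W k - v||^2
        surprise = np.linalg.norm(grad)  # large when the input is novel
        self.vel = self.momentum * self.vel - self.lr * grad
        self.W += self.vel
        return surprise

mem = SurpriseMemory(dim=4)
k = np.array([1.0, 0.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0, 0.0])

first = mem.write(k, v)      # novel association: high surprise
for _ in range(50):
    last = mem.write(k, v)   # repetition: the pair becomes unsurprising
assert last < first
```

Because the surprise score falls as an association is absorbed into the weights, the memory naturally spends its capacity on unexpected inputs (the banana-peel moments) instead of rewriting itself for every token it sees.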