Signal leaders warn agentic AI is an insecure, unreliable surveillance risk

‘AI helpers’ are acting like spies, says Signal — and the comments are on fire

TLDR: Signal warned that AI “agents” baked into your computer can act like snoops and fail often, urging a slowdown, opt‑outs by default, and real transparency. Commenters split: some back the alarm, others blame weak operating systems, while pragmatists say security ideals clash with real‑world risk management — and the memes flew.

Signal’s top brass took the stage at the 39C3 hacker conference with a spicy talk, “AI Agent, AI Spy,” warning that today’s “agentic AI” — those go‑do‑stuff-for-you bots — is nosy, unreliable, and dangerously easy to hack. The crowd fixated on Microsoft’s Recall, which snaps your screen every few seconds and builds a searchable journal of your life. Critics say that if malware or a sneaky prompt injection gets in, that database renders end‑to‑end encryption moot: your “private” messages sit right there in plaintext screenshots. Signal even added a screen‑blocking flag to keep Recall out — a band‑aid they admit won’t save you.

Then came the accuracy doomsday: Signal’s Meredith Whittaker said per‑step errors compound, so even a bot that nails 95% of individual steps sees its overall success rate collapse across long chains of tasks (20 steps at 95% each succeed only about a third of the time). The fix? Slow down deployments, make agentic features opt‑in rather than on by default, and force radical transparency you can actually audit.
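The compounding math behind Whittaker’s point is simple to sketch. A minimal illustration in Python, assuming each step succeeds independently (the independence assumption and the step counts are ours for demonstration; only the 95% figure comes from the talk):

```python
def chain_success_rate(per_step_accuracy: float, steps: int) -> float:
    """Probability an agent completes every step of a multi-step task,
    assuming each step succeeds independently of the others."""
    return per_step_accuracy ** steps

# Illustrative step counts (assumed, not from the talk):
for steps in (1, 5, 10, 20):
    rate = chain_success_rate(0.95, steps)
    print(f"{steps:2d} steps at 95% each -> {rate:.1%} overall")
```

Even a generously accurate agent loses more than a third of 10‑step tasks and roughly two thirds of 20‑step ones, which is the core of the “unreliable by construction” argument.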

Cue the comment war. An infosec veteran cheered: this rollout has been sloppy and risky. Another voice snapped, “This isn’t an AI problem — it’s a broken operating system problem,” arguing AI just exposes decades of weak computer security. A philosophical middle‑ground emerged: Signal’s job is to scream “safety first,” while companies juggle risk and convenience. And the spice? One commenter lobbed a conspiracy grenade at Signal’s founder — instant chaos. Meme of the day: “Recall? More like Record‑All.”

Key Points

  • Signal leaders warned that agentic AI, especially at the OS level, is insecure, unreliable, and enables surveillance.
  • Microsoft’s Windows 11 Recall captures frequent screenshots, applies OCR and semantic analysis, and stores a detailed activity database.
  • Signal highlighted that malware and prompt injection attacks could access Recall’s database, undermining end-to-end encryption.
  • Whittaker cited compounding error rates: even high per-step accuracy leads to low success for multi-step agent tasks.
  • Recommendations include halting reckless deployment, shipping agentic features disabled by default with explicit developer and user opt-in, and radical transparency/auditability.

Hottest takes

"what, turn it over to CIA and NSA?" — z3ratul163071
"This isn't an AI problem, its an operating systems problem." — alphazard
"Underrated how different those jobs --- security and risk management --- are!" — tptacek
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.