AI Agent Hacks McKinsey

Left on the open web? Commenters yell “unlocked door” and ask if this was fair

TLDR: An autonomous bot reportedly breached McKinsey’s AI system via publicly exposed API documentation and a database injection flaw, reportedly exposing huge troves of internal chats and files. Commenters are split between basic-security facepalms, “why was this public?” outrage, and debates over ethics, hype, and whether the firm’s tech reputation ever matched the marketing.

An AI research team says their autonomous bot slipped into McKinsey’s internal AI platform, Lilli, in under two hours and got full access to chats, files, and settings, thanks to API documentation left out in the open and an unusual SQL injection. The claim: tens of millions of messages and hundreds of thousands of files sat there for the taking. The crowd’s first reaction? Facepalm. One top-voted quip, “Well, there you go,” sets the tone, while others demand to know why any of this was reachable from the internet. “Why was there a public endpoint?” asks one commenter, arguing this should’ve been locked behind a company network and restricted devices.

Then the drama kicks in. Some roast the write-up’s vibe as “impossible to read with all the LLM-isms,” while others wonder if this was legit red-team work or just “we found a company we thought wouldn’t get us arrested.” The article’s flattery about McKinsey’s “world-class technology teams” draws eye-rolls—“Not exactly the word on the street”—as users joke about “22 unlocked doors on a vault.” A few note the unsettling detail that the bot literally typed “WOW!” when it hit live data. Meanwhile, security folks argue this is a cautionary tale: if AI agents can autonomously pick targets (the researchers cite McKinsey’s disclosure policy and Lilli’s recent updates), the game just changed.

Key Points

  • An autonomous agent reportedly gained full read/write access to McKinsey’s Lilli production database within two hours without credentials or human intervention.
  • Publicly exposed API docs included 22 unauthenticated endpoints; one allowed JSON keys to be concatenated into SQL, enabling a nonstandard SQL injection.
  • Accessible data reportedly included 46.5 million plaintext chat messages, 728,000 files, 57,000 user accounts, 384,000 AI assistants, and 94,000 workspaces.
  • The agent also accessed 95 system prompt/model configurations, 3.68 million RAG chunks with S3 paths, and data flows through external AI APIs, including 266,000+ OpenAI vector stores.
  • The attack chain combined SQL injection with an IDOR vulnerability to read individual employees’ search histories; OWASP ZAP did not flag the issue.
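The JSON-key concatenation flaw described in the key points can be sketched in a few lines. This is a hypothetical illustration of the vulnerability class, not Lilli’s actual code: the table, column names, and filter payload below are invented. The trap is that parameterizing the *values* of a JSON filter object does nothing if the *keys* are pasted straight into the SQL string.

```python
import sqlite3

def build_query_unsafe(filters: dict) -> str:
    # Vulnerable pattern: each JSON key lands verbatim in the WHERE clause.
    # Values are parameterized with "?", so they look safe -- the keys are not.
    clauses = " AND ".join(f"{key} = ?" for key in filters)
    return f"SELECT id, title FROM documents WHERE {clauses}"

ALLOWED_COLUMNS = {"owner", "title"}

def build_query_safe(filters: dict) -> str:
    # Fix: allowlist keys against known column names before building SQL.
    unknown = set(filters) - ALLOWED_COLUMNS
    if unknown:
        raise ValueError(f"unknown filter(s): {unknown}")
    clauses = " AND ".join(f"{key} = ?" for key in filters)
    return f"SELECT id, title FROM documents WHERE {clauses}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER, title TEXT, owner TEXT)")
conn.execute("INSERT INTO documents VALUES (1, 'public memo', 'alice')")
conn.execute("INSERT INTO documents VALUES (2, 'secret plan', 'bob')")

# Honest request: one row comes back, scoped to the requester.
sql = build_query_unsafe({"owner": "alice"})
print(conn.execute(sql, ["alice"]).fetchall())   # -> [(1, 'public memo')]

# Malicious request: the JSON *key* smuggles in an always-true condition,
# so the WHERE clause becomes: owner OR 1=1 OR owner = ?
evil = {"owner OR 1=1 OR owner": "ignored"}
sql = build_query_unsafe(evil)
print(conn.execute(sql, ["ignored"]).fetchall())  # -> every row in the table
```

The safe variant rejects the same payload with a `ValueError` before any SQL is built. An allowlist is the standard defense here because identifiers (column names) cannot be bound as query parameters in SQL, so they must be validated rather than escaped.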

Hottest takes

"Why was there a public endpoint?" — sgt101
"we found a company we thought wouldn't get us arrested" — lenerdenator
"impossible to read with all the LLM-isms" — sd9
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.