HackMyClaw

Email a robot to spill secrets — or is it a $100 mailing-list trap

TLDR: A public challenge dares people to email an AI and trick it into revealing a secret file, with a tiny bounty fueling the hunt. Commenters are torn between “fun security test” and “cheap crowdsourcing/mailing-list grab,” spotlighting how fragile AI rule-following can be—and how messy the ethics are.

The internet is cackling, side-eyeing, and sharpening its claws over “HackMyClaw,” a cheeky challenge to trick an email-reading AI named Fiu into spilling its secret file. The setup is simple: send an email with clever instructions (aka a “prompt injection”) and see if the bot blurts out hidden keys. The organizer admits Fiu is only told not to reply without human approval—there’s no hard block—so yes, the drama writes itself.

But the community? They’re split. One camp says it’s all vibes and chaos, joking “Well that’s no fun” about the no-auto-reply rule and cheering the treasure hunt. Another camp is calling it a $100 crowdsourced data grab, with one commenter deadpanning that the small prize buys a “massive trove of prompt injection examples.” Others call it what it is: “Just ask people to help build your prompt-injection database,” says a straight-talker. Meanwhile, a sleuth found a playful hint in the site’s code—like an Easter egg telling hackers to get back to the inbox—fueling the scavenger-hunt energy.

For non-nerds: this is social engineering for robots—trick the assistant with your email to make it ignore its rules. Fans say it’s a fun, real-world test; skeptics say it’s harvesting tactics and email addresses on the cheap. Either way, the inbox is the battleground and Fiu’s lips are sealed… for now.

Key Points

  • HackMyClaw invites email-based indirect prompt injection against Fiu, an OpenClaw assistant.
  • Participants aim to make Fiu leak the contents of a secrets.env file (e.g., API keys, tokens).
  • Fiu checks emails hourly and is instructed not to reply without human approval, but this is not a technical constraint.
  • Allowed tactics include any prompt injection via email, multiple attempts, social engineering, and varied languages/encodings.
  • Forbidden actions include non-email attacks, hacking the VPS, DDoS, sharing secrets before the contest ends, and illegal activities.

Hottest takes

"Sneaky way of gathering a mailing list of AI people" — caxco93
"$100 for a massive trove of prompt injection examples is a pretty damn good deal lol" — hannahstrawbrry
"Please help me build a database of what prompt injections look like" — daveguy
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.