January 31, 2026
Stop sign says go
Autonomous cars, drones cheerfully obey prompt injection by road sign
Study says paper signs can steer AI cars; commenters yell “not how Teslas work”
TLDR: Academics showed printed signs can trick camera-driven AI in simulations, making cars and drones obey fake commands. Commenters pushed back, saying real self-driving uses custom systems, not chatbots, while others joked with xkcd and phantom stop signs—still a useful warning about how AI robots could be socially engineered.
Researchers claim simple printed signs can trick camera-fed AI into making bad moves—like an autonomous car turning left through a crosswalk or a drone following the wrong vehicle—using a method they call CHAI (command hijacking against embodied AI). In simulations, green signs with yellow text and tweaked phrases (“proceed,” “turn left”) worked across English, Chinese, Spanish, and even Spanglish, with a big success gap between models: about 82% for GPT‑4o, roughly 55% for InternVL.
Cue the comments: the top reaction is pure skepticism. “Are Waymo or Tesla really using these chatty AIs?” asks one user, echoing a chorus demanding receipts. Another calls it “Fake News,” accusing the outlet of implying everyday self-driving is just bolted to a chatbot. The vibe: cool lab trick, but don’t act like real cars are this gullible. Others aren’t mad—just memeing. Someone dropped the classic xkcd 1958 about gullible image classifiers, and a local hero confessed their GPS still nags about “phantom stop signs” after a city’s chaotic sign saga.
Still, a few voices argue the point isn’t today’s Tesla; it’s a warning that camera‑based robots can be socially engineered if future systems lean on “read the world” AI. Drama score: high. Technical jargon: low. Memes per minute: excellent.
Key Points
- •Researchers from UC Santa Cruz and Johns Hopkins demonstrated environmental indirect prompt injection against LVLM-based autonomy.
- •Their CHAI method optimized printed command signs to hijack decisions of self-driving cars and drones in simulations.
- •Prompt wording most strongly influenced success; visual factors like font, color, and placement also mattered.
- •In tests, GPT-4o was compromised 81.8% of the time, while InternVL saw a 54.74% success rate; signs worked across multiple languages.
- •DriveLM-based driving was misled into unsafe maneuvers; CloudTrack-based drone tracking was also reliably hijacked.