Nightshade: Make images unsuitable for model training

Artists plan to “poison” AI, commenters ask: bold move or déjà vu?

TLDR: Nightshade claims to “poison” images so AI models trained on stolen art learn the wrong things, nudging companies to pay for licenses. Commenters are split: many say it’s old news and easy to filter, while one tester says captioning models still recognized everything—sparking a fight over effectiveness and enforcement.

Nightshade promises to give AI a bellyache: it subtly tweaks images so humans see “cow in a field,” but training models learn “handbag in grass.” The goal isn’t to smash models, but to make scraping unlicensed art expensive enough that paying creators becomes the cheaper option. Think of it as Glaze’s cousin: Glaze defends your style; Nightshade goes on offense as a group deterrent. Sounds spicy, right? The community’s reaction: cue the eye-rolls. Multiple commenters call it old news, linking to earlier submissions of the same tool. One summed up the vibe: we’ve seen this… a few weeks ago… and two years ago.
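Curious what “see cow, learn handbag” looks like in practice? Here’s a minimal sketch of the general clean-label perturbation idea the paragraph describes. To be clear, this is not Nightshade’s published algorithm: the ResNet-50 feature extractor, the MSE feature-matching loss, and the pixel budget `eps` are all illustrative assumptions.

```python
# Minimal sketch of the general idea, NOT Nightshade's actual algorithm.
# Assumptions: a ResNet-50 extractor stands in for whatever a trainer uses,
# and plain MSE feature matching stands in for the paper's loss.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
extractor = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
extractor.fc = torch.nn.Identity()  # keep penultimate-layer features
extractor.eval().to(device)

def poison(cow, handbag, steps=300, lr=0.01, eps=8 / 255):
    """Perturb `cow` (shape (1, 3, H, W), values in [0, 1]) so its features
    match `handbag`'s, while each pixel moves at most `eps`.
    (ImageNet normalization omitted for brevity.)"""
    cow, handbag = cow.to(device), handbag.to(device)
    with torch.no_grad():
        target = extractor(handbag)  # features of the decoy concept
    delta = torch.zeros_like(cow, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        poisoned = (cow + delta).clamp(0, 1)
        loss = torch.nn.functional.mse_loss(extractor(poisoned), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # keep the change hard to notice
            delta.clamp_(-eps, eps)
    return (cow + delta).detach().clamp(0, 1)
```

The human still sees a cow; a model reading features sees something handbag-shaped. That’s the pitch, at least.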

Skeptics go further: can’t scrapers just detect and strip the “poison”? One commenter flatly says they’re “very skeptical,” suggesting a basic filter could nuke the effect. Another joked that it would be hilarious if this research ends up teaching us more about human vision than it does about hurting AI. The biggest splash, though, came from a hands-on test: a user ran Nightshade’s example image through three captioning models and reported that all of them still recognized its visual features just fine, with no handbag-for-cow confusion.
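That test is easy to rerun in spirit. A minimal sketch, assuming a BLIP captioning checkpoint from Hugging Face (the commenter’s three models weren’t named) and placeholder file names:

```python
# Reproducing the spirit of the commenter's test. BLIP-base is an
# assumption; the commenter did not name the three models used.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

name = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(name)
model = BlipForConditionalGeneration.from_pretrained(name)

def caption(path: str) -> str:
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(out[0], skip_special_tokens=True)

# Placeholder file names. If the poison fooled this captioner, the two
# captions would diverge; the commenter reported they did not.
print(caption("cow_original.png"))
print(caption("cow_nightshade.png"))
```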

So the drama splits the room: creators cheer a pressure tactic against free-for-all scraping; techies ask whether it really works, whether it’s new, and whether model trainers will shrug and route around it. Poison, placebo, or just PR?

Key Points

  • Nightshade transforms images into “poison” samples to deter unauthorized generative AI model training.
  • The tool uses multi-objective optimization to minimize visible changes while altering model-perceived features (see the sketch after this list).
  • Nightshade’s effects are robust to cropping, resampling, compression, smoothing, noise, screenshots, and photos of screens.
  • Nightshade differs from Glaze: Glaze is defensive, shielding an artist’s style from mimicry; Nightshade is offensive, built to disrupt models trained on scraped images.
  • A low-intensity setting reduces visual impact, and the overall goal is to raise the cost of training on unlicensed data to encourage licensing.
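To make the multi-objective point concrete, here’s one plausible shape for the two competing terms, building on the sketch above. The LPIPS perceptual penalty and the weight `lam` are illustrative assumptions, not Nightshade’s published design.

```python
# One plausible two-term objective, extending the earlier sketch. LPIPS as
# the visibility penalty and lam=0.05 are assumptions, not Nightshade's
# published choices.
import lpips  # pip install lpips
import torch

perceptual = lpips.LPIPS(net="vgg")

def poison_loss(poisoned, original, feat, target_feat, lam=0.05):
    # Pull model-perceived features toward the decoy concept...
    feature_term = torch.nn.functional.mse_loss(feat, target_feat)
    # ...while penalizing changes a human would notice. normalize=True
    # tells LPIPS the inputs are in [0, 1].
    visibility_term = perceptual(poisoned, original, normalize=True).mean()
    return feature_term + lam * visibility_term
```

The robustness claim in the third bullet is typically earned the same way in this family of attacks: optimize a loss like this over random crops, rescales, and compression of the image, so no single cheap transform strips the perturbation.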

Hottest takes

"Seems the same as these submissions from 2 years ago" — cadamsdotcom
"I'm very skeptical about such systems" — throwfaraway135
"All models successfully identified visual features of the image" — nodja
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.