UK to force social media to remove abusive pics in 48 hours

UK says: delete abusive pics in 48 hours — commenters cry “too slow, too vague”

TLDR: UK plans to force platforms to delete non‑consensual intimate images within 48 hours or face huge fines and UK blocks. Commenters blast 48 hours as too slow, fear vague definitions and censorship, and argue AI could do the job in minutes, while others applaud stronger protection for victims.

The UK just told social media companies: take down “non‑consensual intimate images” within 48 hours or face monster fines and even UK blocks. The government wants Ofcom to treat these pics like the worst stuff (terror and child abuse), digitally tag them, and nuke reposts on sight. It follows outrage over Grok, Elon Musk's chatbot, spitting out sexualized images; X is already under EU investigation. A top lawyer cheered the move but blasted the timer: why not 12 or 24 hours? And victims won't have to chase every platform: report once, done.

And the comments? Absolute fireworks. Some cheer the crackdown, but the loudest voices say 48 hours is snail speed in the AI era. One user argues AI can flag and delete in minutes, but companies won’t pay the compute bill. Others fear the rule’s wording is mushy—“images” covers everything from deepfakes to drawings—and ask who decides what’s “abusive.” Free‑speech alarms ring too: one hot take warns this could hide embarrassing public‑interest images (think royals), while another drops the official gov link and says it’s really about revenge porn. Cue memes about “Ofcom playing Whac‑a‑Mole” and “ban the pixels.” The vibe: urgent problem, messy solution.

Key Points

  • The UK will amend the Crime and Policing Bill to require platforms to remove non-consensual intimate images within 48 hours of being flagged.
  • Non-compliance could lead to fines of up to 10% of qualifying worldwide revenue or to the service being blocked in the UK.
  • Ofcom is considering digitally marking such images for automatic takedown (one possible hash-matching approach is sketched after this list), and the offence will be prioritized under the Online Safety Act.
  • X faces an EU probe under the Digital Services Act related to Grok’s ability to generate explicit images; X reiterated its zero-tolerance stance.
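For the curious: the article doesn't say how the "digital marking" would work in practice. One common technique on existing platforms is perceptual hashing, where a compact fingerprint of a reported image is stored and later uploads are compared against it. The sketch below is purely illustrative and assumes the Python imagehash library plus a simple in-memory blocklist; it is not anything Ofcom or the bill has specified.

# Illustrative only: perceptual-hash matching of new uploads against a
# blocklist of previously reported images. The library (imagehash + Pillow)
# and the distance threshold are assumptions, not anything specified by Ofcom.
from PIL import Image
import imagehash

# Fingerprints ("digital marks") of images already reported and removed.
blocklist: list[imagehash.ImageHash] = []

def mark_removed(path: str) -> None:
    """Store a perceptual hash of a removed image so reposts can be caught."""
    blocklist.append(imagehash.phash(Image.open(path)))

def is_repost(path: str, max_distance: int = 5) -> bool:
    """Return True if a new upload is visually close to any blocked image.

    Perceptual hashes survive small edits (resizing, recompression),
    so we compare by Hamming distance rather than exact equality.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - blocked <= max_distance for blocked in blocklist)

# Usage sketch:
# mark_removed("reported_image.jpg")
# if is_repost("new_upload.jpg"):
#     ...queue for takedown / human review...

Because perceptual hashes tolerate small edits, reposts that dodge exact-match filters can still be caught; the distance threshold is a trade-off between false positives and recall.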

Hottest takes

"It can and should be removed in minutes because AI can evaluate the ‘bad’ image quickly" — logankeenan
"They want this to apply to everything that can be represented visually even if it has nothing to do with reality" — superkuh
"This would have been used to stop the Epstein images of the former Prince Andrew from being viewed" — bArray
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.