March 17, 2026
Your feed runs on fury
Meta and TikTok let harmful content rise after evidence showed outrage drove engagement
Outrage sells, kids suffer—commenters ask: are we finally going to fix this?
TLDR: Whistleblowers say Meta and TikTok let more “borderline” content through because outrage boosts engagement and preserves political relationships. Commenters aren’t surprised—some call it an unavoidable arms race, others demand action—asking, bluntly, what we’re going to do now and why safety keeps losing to growth.
Whistleblowers just told the BBC that Meta and TikTok let more harmful, “borderline” content slide because outrage drives clicks—and the comments section came in hot. The top vibe? We’ve known this for years—now what? One user pointed to The Social Dilemma and basically begged, “Are we going to do anything about it?” Another admitted they try to avoid outrage-bait but called it a “societal ill” bigger than any single user.
The details are messy. A Meta engineer says bosses okayed more borderline content (think misogyny and conspiracy teasers) to compete with TikTok because the “stock price is down.” Over at TikTok, a staffer claims the company prioritized complaints from politicians over reports of harmful posts featuring children to keep regulators sweet. Meanwhile, Instagram Reels allegedly launched without proper safeguards; internal research showed more bullying and hate in Reels comments, even as Meta poured resources into growth and denied a few headcount to safety teams. Meta and TikTok both deny it, calling the claims wrong or “fabricated.”
Cue the community split: some say this is a platform arms race—if one feeds fury, the rest must follow to survive. Others want regulation or real consequences. And then there’s the gallows humor: one deadpan reply—“Drugs.”—when asked how to cope with the outrage machine. As one ex-TikTok engineer put it, these algorithms are a black box; users just see the end result. The crowd’s verdict: we’re not shocked—but we’re exhausted. Read the full BBC report here: Inside the Rage Machine
Key Points
- Whistleblowers say Meta and TikTok allowed more “borderline” harmful content to boost engagement and compete for users.
- A Meta engineer alleges management directed teams to surface more borderline content, citing stock price pressures.
- Internal research shows Instagram Reels comments had higher rates of bullying, hate speech, and violence/incitement than elsewhere on Instagram.
- A TikTok staffer said cases involving politicians were prioritized over reports of harmful posts featuring children to avoid regulatory risks.
- Meta and TikTok deny deliberately amplifying harmful content; TikTok says claims are fabricated and cites investments in safety technology.