February 26, 2026
Comment section goes DEFCON spicy
Statement from Dario Amodei on Our Discussions with the Department of War
Anthropic draws the line: no mass spying, no kill‑bots — and the comments explode
TLDR: Anthropic told the Pentagon it won’t support mass domestic surveillance or fully autonomous “kill-bots,” risking contracts over safety and civil liberties. Commenters split between praising a rare moral stand, calling it slick PR, and joking that the military might end up stuck with Grok—raising big stakes for how AI is used in war and at home.
Anthropic's boss Dario Amodei just dropped a firebrand statement: the company backs U.S. defense and already runs its AI, Claude, inside classified networks, but it won't help with mass domestic surveillance or with fully autonomous weapons that pull the trigger without a human. The alleged flashpoint? The government (officially the Department of Defense, though Amodei calls it the "Department of War") wants AI firms to allow "any lawful use," which would mean removing those safeguards. Anthropic says no, even at the cost of contracts, while pointing out that it cut off China-linked clients and supported chip export controls.
The comment section turned into a mosh pit. One camp cheered the stand as a rare tech spine: “Big respect,” with users calling it a moral line in the sand. Another camp slammed it as PR theater, rolling eyes at the “we sacrificed revenue” flex and asking where the receipts are. And then came the memes: a panicked refrain that if Claude, GPT, and Gemini get iced, the military could end up with Grok—cue jokes about bargain‑bin battle bots and “boot camp for chatbots.”
Even the name choice sparked drama: pedants piled on that it's called Defense, not War, while others shrugged and said that, semantics aside, the real fight is over who controls how AI is used. It's principle vs. pragmatism, with the internet doing what it does best: cheer, jeer, and meme it into a battlefield.
Key Points
- Anthropic claims first deployments of frontier AI models in U.S. classified networks and National Laboratories.
- Claude is described as widely used for intelligence analysis, simulations, operational planning, and cyber operations.
- Anthropic says it forwent several hundred million dollars by cutting off access to firms linked to the CCP and halted related cyberattacks.
- The company excludes mass domestic surveillance and fully autonomous weapons from its contracts, citing privacy and reliability concerns.
- The statement says the Department of War seeks contracts allowing "any lawful use" and the removal of safeguards; it breaks off mid-sentence on a note about a threat to remove Anthropic.