February 3, 2026
Trust issues, but make it AI
Bruce Schneier: AI and the scaling of betrayal
Readers rage: Stop calling bots “friends” and start regulating the puppet masters
TLDR: Bruce Schneier warns we’re mistaking AI for friendly helpers and urges regulating the companies behind it, even making some AI roles legally obligated to put users first. Commenters raged at title edits, blasted corporate “betrayal,” and rallied around fiduciary-style rules to make AI trustworthy.
Bruce Schneier just poked the internet’s biggest nerve: trust. His talk says we confuse trusting people with trusting systems, and AI will turbocharge that confusion. We’ll treat chatbots like pals when they’re really paid services, and the corporations behind them will squeeze that mistake for profit. His fix? Don’t “regulate AI” in the abstract; regulate the companies using it, and treat some AI roles like fiduciaries—trusted agents with legal duties—so they can’t exploit us.
Commenters lit up. First fight: the title. Purists demanded the original “AI and Trust” and scolded the dramatic edit, waving HN rules. Then came consumer-betrayal rage: one user called the marketplace an “ongoing experiment” in how often companies can defect while we keep paying—cue shrinkflation cereal jokes. The loudest nod: “Surveillance is the business model of the Internet. Manipulation is the other.” Others pushed solutions, cheering “make AIs fiduciaries”: if a bot acts like a doctor or accountant, it must put you first and face penalties. Cynics rolled in with memes—“Trust me, bot,” “Betrayal-as-a-Service,” and “my alarm clock is the only tech I still trust.” This thread felt like group therapy for tech heartbreak—and a plan to hold it accountable.
Key Points
- Schneier differentiates interpersonal trust (intentions) from social trust (reliability of strangers).
- He outlines four systems enabling trust: morals, reputation, laws, and security technologies.
- Laws and security technologies scale trust, enabling cooperation among strangers, while morals and reputation underpin interpersonal trust.
- AI will exacerbate confusion between trust types as people treat AIs like friends rather than services.
- Governments should regulate the organizations that control and use AI, rather than AI itself, to foster a trustworthy AI environment.