April 16, 2026
Open code, closed doors, hot comment wars
Discourse Is Not Going Closed Source
Discourse stays open; Cal.com goes closed — cries of 'bad faith' and a JS detox dream
TLDR: Discourse vows to keep its code open while Cal.com closes up over AI fears. Commenters pile on: accusations of “bad faith,” a “less JavaScript” security meme, and backlash against secret AI tools, with many insisting openness means more defenders, faster fixes, and fewer excuses.
Open-source drama alert: Discourse just told the world they’re not going closed source—code stays public—while Cal.com slams its repo shut, blaming super-fast AI hacks. Discourse’s stance: hiding code won’t save you; it just locks out the good guys. They even say they used advanced AI tools to find and fix a pile of bugs in their own open code, and point to OpenAI and Anthropic rolling out cyber-focused models carefully. OpenAI claims one tool scanned over a million code updates and found tons of serious issues—yikes. The comments? Absolutely sizzling.
One top reply yells “bad faith” at Cal.com’s security story, calling it a business move in a spooky costume. Another kicks off a memeable “JS detox”: ditch heavy single‑page apps (one giant page stuffed with JavaScript) and go back to simple pages and forms “in the name of security.” Others torch AI giants for hoarding secret super-bots behind whitelists, keeping defenders out while attackers grind on. Meanwhile, open‑source fans cheer the “useful urgency” of public code: more eyes, faster fixes. And of course, a drive-by snark complains that Discourse makes you use an email address: peak comment-section energy. The vibe: openness brings allies, and closing up won’t hide you from AI or the browser; the real fight is over who gets the tools: everyone, or a gated VIP list run by OpenAI and Anthropic.
Key Points
- Discourse affirms it will remain open source under GPLv2 and rejects closing its codebase in response to AI-driven security concerns.
- The article argues closing source is ineffective for SaaS security because client-side code and APIs remain inspectable and AI can analyze binaries.
- Discourse reports using AI models (GPT-5.3 Codex, GPT-5.4, Claude Opus 4.6) to find and fix many latent security issues in its open-source codebase.
- OpenAI and Anthropic are described as cautious about AI security vectors, with tools like GPT-5.4-Cyber and Anthropic Mythos being rolled out carefully.
- OpenAI’s Codex Security reportedly scanned 1.2 million commits in 30 days, identifying 792 critical and 10,561 high-severity issues, highlighting AI’s speed in vulnerability discovery.