February 7, 2026
Bots clock in, humans clock out
Software Factories and the Agentic Moment
No‑Human Coding Factory Debuts; Crowd Split Between Wow and “Fix Your Site”
TLDR: StrongDM claims it built a no‑human “software factory” where AI writes and tests code, sparking big buzz. Commenters split between excitement over the “dark factory” future and snark about the project’s own slow, glitchy site, while an insider jumped in to field questions.
Robots wrote the code and the internet had feelings. StrongDM says it built a “software factory” where AI agents write and check code with no humans touching or reviewing it, guided by plain‑English “scenarios” and tested in a “Digital Twin” world that mimics tools like Slack and Google Docs. There’s even a cheeky rule: if you’re not spending $1,000 a day on AI tokens per engineer, you’re not trying.
Cue the comment section chaos. One camp is hyped, pointing to the “dark factory” vision where software basically grows itself; even veteran blogger Simon Willison popped in to confirm this is the stealth team he’d hinted at, and shared a longer write‑up of his own. Another camp? Roasting the rollout. Users complained the site factory.strongdm.ai was slow, glitchy, and “fails on aesthetics and accessibility.” One commenter even hit a blunt error message and posted it like a trophy. The vibe: “Cool story—now make your own website work.”
Then the plot twist: a team member showed up—fresh out of college—saying it’s been a “wild ride” and offering to answer questions. Between believers chanting “non‑interactive development” and skeptics yelling “ship quality first,” the meme of the day wrote itself: bots clock in, humans clock out… and then the site crashes.
Key Points
- StrongDM describes a non-interactive “Software Factory” where agents write and validate code from specs and scenarios, with no human coding or code review.
- The StrongDM AI team was founded on July 14, 2025 by Justin McCarthy (co-founder, CTO), Jay Taylor, and Navan Chauhan to pursue this approach.
- A key catalyst was Anthropic’s Claude 3.5 (October 2024) and Cursor’s YOLO mode (December 2024), which improved long-horizon agentic coding.
- The team replaced traditional tests with external “scenarios” and a probabilistic “satisfaction” metric, using LLMs as judges to validate behavior.
- They built a “Digital Twin Universe” cloning services like Okta, Jira, Slack, Google Docs/Drive/Sheets to provide robust, less gameable validation.
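To make the “scenarios plus probabilistic satisfaction” idea concrete, here is a minimal sketch of what LLM-as-judge validation could look like. Everything here is hypothetical: the names (`Scenario`, `judge`, `satisfaction`) are illustrative, not StrongDM’s actual API, and the judge is stubbed with a keyword check where a real system would call an LLM to score the transcript against a rubric.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """A plain-English behavioral spec, checked by a judge instead of asserts."""
    name: str
    prompt: str
    rubric: str

def judge(transcript: str, rubric: str) -> bool:
    """Stand-in for an LLM judge: a trivial keyword check.
    A real system would ask an LLM whether the transcript satisfies the rubric."""
    return all(word in transcript for word in rubric.split())

def satisfaction(scenario: Scenario, run_system, trials: int = 10) -> float:
    """Probabilistic 'satisfaction': the fraction of trials the judge accepts.
    Replaces a binary pass/fail unit test with a score in [0, 1]."""
    passes = sum(
        judge(run_system(scenario.prompt), scenario.rubric)
        for _ in range(trials)
    )
    return passes / trials

# Toy system under test: deterministic here, but in practice an agent run
# whose output varies between trials -- hence the averaged score.
def run_system(prompt: str) -> str:
    return "user invited to channel and notified"

s = Scenario("invite-flow", "Invite a user to #general", "invited notified")
score = satisfaction(s, run_system)  # fraction of judged-successful trials
```

The point of averaging over trials is that agentic output is nondeterministic: a scenario that passes 9 of 10 runs is meaningfully different from one that passes 2 of 10, which a single pass/fail test cannot express.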
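The “Digital Twin Universe” idea is essentially high-fidelity fakes of external services that agents exercise end to end. As a hedged illustration only (the class and method names below are invented, not drawn from StrongDM or the Slack API), a twin differs from a conventional mock in that validation inspects the twin’s resulting state rather than pre-scripted return values:

```python
class FakeSlack:
    """Minimal in-memory stand-in for a Slack-like service, in the spirit of
    a 'digital twin': agents drive real workflows against it, and a validator
    then checks the state it ended up in."""

    def __init__(self):
        # channel name -> list of posted messages
        self.channels = {}

    def create_channel(self, name: str) -> None:
        self.channels.setdefault(name, [])

    def post_message(self, channel: str, text: str) -> None:
        # Enforce real-service constraints so agents can't game validation
        # by posting into channels that were never created.
        if channel not in self.channels:
            raise KeyError(f"no such channel: {channel}")
        self.channels[channel].append(text)

    def history(self, channel: str):
        return list(self.channels.get(channel, []))

# An agent's workflow runs against the twin...
twin = FakeSlack()
twin.create_channel("general")
twin.post_message("general", "deploy finished")
# ...and validation checks observable state, not stubbed responses.
```

Because the twin enforces the same constraints as the real service (here, posting requires an existing channel), it is harder for a code-writing agent to “pass” by accident than with a mock that returns success unconditionally.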