February 11, 2026
Vending boss or just gloss?
GLM-5: From Vibe Coding to Agentic Engineering
Open‑source GLM‑5 flexes work‑bot muscles; the crowd asks about ARC, speed, and price
TLDR: GLM‑5 launches as an open‑source “work bot,” touting cheaper ops, new training tech, and a simulated vending‑business win. The community cheers the open MIT release and doc‑making agents but grills the team on ARC‑AGI results, real‑world speed, pricing, and whether a consumer‑friendly “Air” model is coming.
GLM‑5 just dropped claiming “from chat to work” status: bigger brains, a new “slime” training system, and a win on a simulated one‑year vending‑machine business test with a $4,432 finish. It’s open‑sourced under MIT, promises cheaper operation with DeepSeek Sparse Attention, and brags it’s closing in on premium rivals like Claude Opus 4.5. The crowd? Split. Some are crowning a new open‑source boss; others are clutching their wallets and stopwatch apps.
The hottest thread is pure popcorn: one user fires, “why no ARC‑AGI?”—a tough independent test—while another reads the comparison to Claude Opus (and not Sonnet) as a power move and begs for a GLM‑5 Air that runs on home rigs. A burned‑before user says GLM‑4.7 was “slow through z.ai” but is ready to retry. The practical chorus asks, “Is it cheaper than Claude or ChatGPT?” The company hints yes on compute, but hard numbers aren’t here yet. Fans cheer the Hugging Face release, the Z.ai Agent mode that spits out Word/Excel/PDFs, and that vending‑sim crown—cue memes about “MBA in vending” and “Slime time.” Skeptics counter by demanding real‑world speed numbers and eval receipts, linking a related thread. Verdict: open‑source hype vs. day‑to‑day reality, round one.
Key Points
- GLM-5 scales to 744B parameters (40B active) from 355B (32B active) and increases pre-training data to 28.5T tokens from 23T.
- It integrates DeepSeek Sparse Attention (DSA) to reduce deployment cost while maintaining long-context capacity.
- A new asynchronous RL infrastructure, “slime,” improves training throughput and enables more fine-grained post-training iterations.
- GLM-5 outperforms GLM-4.7 on internal CC-Bench-V2 and ranks #1 among open-source models on Vending Bench 2 with a $4,432 final balance, nearing Claude Opus 4.5.
- GLM‑5 is open‑sourced under the MIT License on Hugging Face and ModelScope (see the loading sketch after this list), available via api.z.ai and BigModel.cn, compatible with Claude Code and OpenClaw, and supports generating ready-to-use .docx, .pdf, and .xlsx outputs.
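A minimal sketch of what the Hugging Face release means in practice, assuming the weights land under a repo ID like `zai-org/GLM-5` (the exact ID isn’t given in the announcement) and that the checkpoint follows the standard transformers chat workflow; the full 744B-parameter MoE won’t run on a home rig without multi-GPU sharding or quantization:

```python
# Hypothetical loading sketch for the open-weights release.
# "zai-org/GLM-5" is an assumed repo ID, not confirmed by the announcement;
# check the actual model card for the real name, dtype, and hardware guidance.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-5"  # assumption, not confirmed

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # use the dtype stored in the checkpoint
    device_map="auto",       # shard the MoE across available GPUs
    trust_remote_code=True,  # may be needed depending on transformers version
)

# Chat-style prompt through the tokenizer's chat template.
messages = [{"role": "user", "content": "Draft a one-page plan for a vending-machine business."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same weights could presumably also be served behind an OpenAI-compatible stack such as vLLM or SGLang, which is likely how hosted endpoints like api.z.ai expose the model, though the announcement doesn’t spell that out.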