February 11, 2026
Chip wars, comment chaos
GLM-5 was trained entirely on Huawei chips
Fans hail Huawei-only AI power; skeptics cry ‘sketchy site’
TLDR: GLM-5 is claimed to be trained entirely on Huawei chips, signaling a push for chip independence. The comments are split: some shout “RIP Nvidia” and say it performs well despite being slower, while others call the launch page unaffiliated or spam and doubt the training claim—so verification matters.
The internet just got a new AI cliffhanger: Zhipu AI says its GLM-5 mega-model was trained only on Huawei chips, a flex of tech independence. Fans rushed in with victory laps—one tester crowed "RIP Nvidia," claiming GLM-5 feels as good as his daily go-to models, with only speed lagging slightly.
But the comment section went full detective mode. Several users blasted the launch page as sketchy, pointing out it’s not z.ai, the official site. One flagged a tiny disclaimer saying the page isn’t affiliated, while another insisted the official release only confirms Huawei for running the model, not necessarily training it. Translation: is this a breakthrough, or just spicy marketing?
Meanwhile, the numbers—“745 billion parameters” and “agentic intelligence” (think: bots that can plan steps and use tools)—had non-tech readers clutching their pearls. Hype met eye-rolls as skeptics shouted “spam website,” while early adopters countered with hands-on results: better writing, smarter reasoning, fewer code blunders than older GLMs.
The vibe: a geo-tech soap opera. If GLM-5 truly ditched U.S. chips, that’s a major milestone. If not, this is the most dramatic press-release reading of the week. Either way, the crowd is here for plot twists, memes, and the RIP Nvidia playlist.
Key Points
- GLM-5 is a fifth-generation LLM by Zhipu AI with a 745B-parameter MoE architecture (256 experts, 8 activated per token, 44B active parameters per inference).
- The model was trained entirely on Huawei Ascend chips using the MindSpore framework, emphasizing independence from US hardware.
- Zhipu AI raised approximately HKD 4.35B (about US$558M) in a Hong Kong IPO on January 8, 2026, funding GLM-5's development.
- GLM-5's capabilities include creative generation, code assistance, multi-step reasoning, agentic workflows, and handling massive context windows.
- Technical features include DeepSeek's sparse attention (DSA) enabling contexts of up to 200K tokens, and specialized variants such as GLM-Image and GLM-4.6V/4.5V.
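The "256 experts, 8 activated per token" design in the first bullet is a standard sparse mixture-of-experts (MoE) pattern: a router scores all experts for each token, only the top-k run, and their outputs are blended. The toy NumPy sketch below illustrates that generic idea only; it is not Zhipu's implementation, and the hidden size and expert layers are made-up placeholders.

```python
import numpy as np

# Generic top-k MoE routing sketch (illustrative only, not GLM-5's code).
NUM_EXPERTS = 256   # total experts, as reported for GLM-5
TOP_K = 8           # experts activated per token, as reported
D_MODEL = 64        # hypothetical hidden size for this toy example

rng = np.random.default_rng(0)
# Toy expert: one linear layer per expert; a real MoE uses FFN blocks.
expert_weights = rng.standard_normal((NUM_EXPERTS, D_MODEL, D_MODEL)) * 0.02
router_weights = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.02

def moe_forward(token: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts."""
    scores = token @ router_weights            # score all 256 experts
    top = np.argsort(scores)[-TOP_K:]          # pick the 8 best
    gate = np.exp(scores[top] - scores[top].max())
    gate /= gate.sum()                         # softmax over the selected 8
    out = np.zeros_like(token)
    for g, e in zip(gate, top):
        out += g * (token @ expert_weights[e]) # weighted sum of expert outputs
    return out

token = rng.standard_normal(D_MODEL)
y = moe_forward(token)
print(y.shape)
```

This is why a 745B-parameter model can report only ~44B "active" parameters: for any given token, 248 of the 256 experts do no work at all.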