March 4, 2026

Qwexit: Panic, Praise, and Popcorn

Something is afoot in the land of Qwen

Shock exits, CEO emergency meeting; comments split: 'RIP Qwen' vs 'They’ll rise again'

TL;DR: Qwen’s lead researcher and several teammates suddenly quit, triggering an emergency CEO meeting as rumors of a reorg swirl. Commenters are split between doom memes and optimism, debating whether Qwen 3.5 is a breakthrough or a bust as some testers rave and others bash its coding behavior.

Qwen just dropped a stellar new AI series, but the real drama is behind the scenes: lead researcher Junyang Lin abruptly posted “me stepping down. bye my beloved qwen”—sending fans into meltdown. Alibaba’s CEO raced into an emergency all‑hands, and a respected Chinese outlet reports several other key team members also resigned. Rumors blame a reorg and a new boss from Google’s AI team—unconfirmed, but perfect fuel for the rumor mill. Meanwhile, Qwen 3.5’s “open weight” models (you can download and run them yourself) are being called some of the best yet.

The comments went full tabloid. One user crowned the moment with a meme‑y eulogy: “the qwen is dead, long live the qwen.” Others are begging the team to regroup elsewhere, with one noting this is the kind of research governments should be funding. The product itself, meanwhile, sparked a flame war: some testers call Qwen 3.5’s mid‑sized model “the most capable” coding helper they’ve tried, while another said it was “pretty bad,” complaining it started writing files from scratch instead of using basic tools.

So is this a breakup or a plot twist? The team’s lead posted again telling “Brothers of Qwen” to keep going, which only added mystery. For more drama, commenters are piling into this thread with popcorn in hand.

Key Points

  • Lead Qwen researcher Junyang Lin announced his resignation on X at 12:11 AM Beijing time on March 4.
  • Alibaba’s Tongyi Lab held an emergency all‑hands around 1:00 PM Beijing time the same day, addressed by CEO Wu Yongming.
  • 36Kr reports additional Qwen departures: Binyuan Hui, Bowen Yu, Kaixin Li, and other young researchers.
  • An unconfirmed detail suggests a reorganization placing a researcher from Google’s Gemini team in charge of Qwen.
  • Qwen 3.5 launched on Feb 17 with a 397B‑parameter model (an 807GB download) and later added 122B, 35B, 27B, 9B, 4B, 2B, and 0.8B variants; the 2B is a multimodal reasoning model as small as ~1.27GB when quantized.

Hottest takes

"the qwen is dead, long live the qwen." — raffael_de
"the most capable agentic coding model I’ve tested at that size" — sosodev
"it just started writing all the files from scratch" — zoba
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.