Qwen3-Coder-Next

Tiny coder AI sparks big debate: fast, cheap, and finally local?

TLDR: Qwen3-Coder-Next is a small open coding AI with strong scores and a 48.4GB local build. Commenters split: some cheer fast local use and ‘dumb agent + smart orchestrator’; others doubt laptop viability and ask what ‘agent turns’ mean.

Qwen3-Coder-Next just dropped and the dev crowd is buzzing. It’s an open model built to write and fix code, trained like a scrappy intern—doing real tasks, taking feedback, and racking up “agent turns” (the back-and-forth steps an agent takes to finish a job). It posts strong scores on popular bug-fixing tests like SWE-Bench Verified, while staying cheap to run. But the real show is the comments. Unsloth turned this into a DIY party with fresh GGUF files and a how-to for running Claude Code/Codex-style tools locally (guide). One user flagged the 48.4GB build as “laptop-friendly,” while another sighed: “I still haven’t experienced a local model that fits on my 64GB MacBook Pro”—cue the laptop vs gaming PC memes.
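
If you want to kick the tires locally, a minimal sketch of loading a GGUF quant with llama-cpp-python looks roughly like this; the filename, quant level, and context size below are placeholders rather than the exact files Unsloth publishes, so swap in whatever build you actually download.

```python
# Minimal sketch: run a local GGUF build of Qwen3-Coder-Next with llama-cpp-python.
# The model filename and settings are hypothetical placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-Coder-Next-Q4_K_M.gguf",  # hypothetical filename; use your downloaded quant
    n_ctx=32768,        # context window; lower this if you run out of RAM
    n_gpu_layers=-1,    # offload all layers to GPU/Metal when available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."}
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```

The same GGUF file should also work under llama.cpp’s server (or similar runtimes) if you’d rather expose an OpenAI-compatible endpoint for agent tooling.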

The hottest take? A bold claim that “faster, dumber agents + smart orchestrators” might beat slow, brainy models—translation: a speedy worker bee directed by a wise manager could out-code the overqualified diva. Meanwhile, chart detectives asked what a spread of 50–280 agent turns even means for a fixed score, turning the efficiency vs accuracy debate into a full-on group project. And yes, someone was just hypnotized by those silky demo recordings. Verdict: the tech says “efficient and capable,” but the crowd’s split—go small-and-fast, or stick with the big brains? The drama is delicious.
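
For the curious, the “fast worker + smart orchestrator” idea boils down to a simple control loop: a larger model plans, a small fast model executes each step. The sketch below is purely illustrative; both calls are stubs standing in for real model APIs, and none of the names come from the Qwen release.

```python
# Toy sketch of the "fast worker + smart orchestrator" pattern from the thread.
# Both functions are stubs: in practice call_orchestrator would hit a larger
# reasoning model and call_worker a small local coder model.

def call_orchestrator(task: str) -> list[str]:
    """Break a coding task into small, concrete steps (stub)."""
    return [f"Step {i} of: {task}" for i in range(1, 4)]

def call_worker(step: str) -> str:
    """Execute one step quickly with the small local model (stub)."""
    return f"patch for [{step}]"

def run(task: str) -> list[str]:
    # Many cheap, fast worker turns, directed by one smarter planner.
    return [call_worker(step) for step in call_orchestrator(task)]

if __name__ == "__main__":
    print(run("fix the failing unit test in the parser"))
```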

Key Points

  • Qwen3-Coder-Next is an open-weight model focused on coding agents and local development, built on Qwen3-Next-80B-A3B-Base with hybrid attention and a mixture-of-experts (MoE) design.
  • Training emphasizes scaled agentic signals: continued pretraining, supervised fine-tuning on agent trajectories, domain expert training, and expert distillation.
  • The model surpasses 70% on SWE-Bench Verified (with SWE-Agent) and remains competitive on multilingual and SWE-Bench Pro benchmarks.
  • Performance on SWE-Bench Pro improves with more agent turns, indicating strong long-horizon reasoning in multi-turn tasks.
  • Qwen3-Coder-Next (3B active parameters) achieves a strong efficiency–performance Pareto tradeoff, comparable to models with 10×–20× more active parameters, and is demonstrated across multiple applications.

Hottest takes

"faster, dumber coding agents paired with wise orchestrators might be overall faster" — vessenes
"I still haven't experienced a local model that fits on my 64GB MacBook Pro" — simonw
"Looks great - i'll try to check it out on my gaming PC" — endymion-light
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.