April 12, 2026
Open-ish, and the internet has notes
MiniMax M2.7 Is Now Open Source
“Open Source” or Open Sore? MiniMax M2.7 drop sparks license brawl and self‑hype debate
TL;DR: MiniMax released M2.7 with downloadable weights and a free NVIDIA API, touting self-improving results. Commenters love the easy access and early coding chops, but roast the “open source” label and question the “built itself” claim, noting that the non‑commercial license makes this open weights, not true open source.
MiniMax just dropped its M2.7 AI weights on Hugging Face and bragged it “helped build itself,” scoring medals and boosting performance by 30% while running solo. NVIDIA’s tossing in free API access so anyone can take it for a spin. Cue the confetti… and then the comment section lit on fire.
The hottest fight: “open source” vs “open-ish.” Commenters pounced on the license, pointing to the fine print that allows non‑commercial use but requires permission for business use. As one put it, it’s “absolutely not ‘open source’,” and others chimed in that this is open weights (the files you need to run it), not true open source. The license link became exhibit A, and the vibe turned into “HN lawyers in the chat.”
Meanwhile, fans brought the hype: GGUFs (a laptop‑friendly model format) are already out thanks to Unsloth (link), and early users say M2.7 codes shockingly well for its size. Others are thrilled about NVIDIA’s free trial. But skeptics pushed back on the “built itself” headline: one sharp take says the model tweaked its scaffold (the run loop and memory) to perform better, not its core brain. Still a big deal, just not sci‑fi. The drama doesn’t stop: some hail a capable, practical assistant; others see slippery marketing and a license that screams “look, don’t touch” for businesses. Popcorn secured.
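For readers who want to poke at the free endpoint themselves, here is a minimal sketch of the OpenAI-style chat payload that NVIDIA's hosted API catalog accepts. The base URL matches NVIDIA's documented OpenAI-compatible gateway, but the model identifier shown is an assumption; check the catalog listing for the real string.

```python
import json

# NVIDIA's hosted endpoints speak the OpenAI chat-completions dialect.
# BASE_URL is NVIDIA's documented OpenAI-compatible gateway; MODEL is an
# ASSUMED id for M2.7 -- verify the exact string on the catalog page.
BASE_URL = "https://integrate.api.nvidia.com/v1"
MODEL = "minimaxai/minimax-m2.7"  # hypothetical identifier

def build_chat_request(prompt: str, temperature: float = 0.2) -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

payload = build_chat_request("Write a function that reverses a string.")
print(json.dumps(payload, indent=2))  # POST this to f"{BASE_URL}/chat/completions"
```

With an API key from NVIDIA, the same payload works through any OpenAI-compatible client by pointing its base URL at the gateway above.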
Key Points
- MiniMax released M2.7 with downloadable weights on Hugging Face and trial access via NVIDIA’s free API, under a license that limits commercial use.
- During development, M2.7 underwent a self-evolution process over 100 rounds, autonomously modifying its code and evaluation harness, yielding a 30% performance improvement.
- On MLE Bench Lite, across three 24-hour trials on 22 tasks (single A30 GPU each), M2.7 achieved an average 66.6% medal rate, improving continuously over time.
- M2.7’s engineering benchmarks include 56.22% on SWE-Pro (matching GPT-5.3-Codex), 76.5% on SWE Multilingual, 52.7% on Multi SWE Bench, and 55.6% on VIBE-Pro.
- The model shows practical capability in SRE incident response and office productivity, scoring a 1495 Elo on GDPval-AA and performing multi-round high-fidelity editing in Word, Excel, and PowerPoint.