April 1, 2026
One bit, big brawl
Salomi, a research repo on extreme low-bit transformer quantization
SALOMI says ‘1‑bit AI’ isn’t it; fans praise honesty, skeptics cry shovelware
TLDR: SALOMI’s own author says strict “1‑bit AI” doesn’t perform well, with better results needing a touch more than a single bit. Commenters split between applauding the candor and accusing the repo of hype‑surfing, turning a technical result into a showdown over honesty versus clout‑chasing.
Binary dreams met hard reality, and the comments went off. The SALOMI repo rolled in promising research on squeezing big AI models down to almost “ones and zeros.” But the headline from inside the project itself? Pure 1‑bit doesn’t hold up. The author, Edward9055, dropped a reality check: don’t trust pretty charts if the model’s actual language performance tanks, and strict “one bit per weight” still falls short. Better results come with slightly more than a single bit and smarter tricks.
That honesty got applause from the “science over hype” crowd—especially with receipts like RESEARCH.md and the very on‑brand HONEST_ASSESSMENT.md. But the snark brigade showed up fast. User kevmo314 lobbed the first tomato: was this just AI‑assisted shovelware pushed out to piggyback a trendy release? Translation: cool repo, but is it clout‑chasing?
So the vibe split: team transparency saluted a rare “we tried, here’s where it breaks” post, while team side‑eye smelled timing and marketing. In between, folks joked about the “one‑bit wonder” becoming the “one‑bit blunder,” and whether compressing a brain to a coin flip was ever going to end well. In the end, SALOMI’s big message isn’t magic—it’s maturity: if you want real wins, you’ll need more than a single bit and a lot more honesty.
Key Points
- SALOMI is a research repository focused on extreme low-bit transformer quantization and inference, with code, tests, and research artifacts.
- The key conclusion is that strict 1.00 bits-per-parameter (bpp) post-hoc binary quantization is not a strong solution for GPT-2–class language modeling under rigorous evaluation.
- More credible practical results concentrate around ~1.2–1.35 bpp using Hessian-guided vector quantization (VQ), mixed precision, or magnitude-recovery methods.
- The repository emphasizes rigorous end-to-end evaluation and advises relying on curated documents (RESEARCH.md, docs, tests) over earlier, more optimistic drafts.
- SALOMI is curated for public use, with clear setup instructions and optional OpenCL backend support, and is released under the Apache-2.0 license.
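To make "post-hoc binary quantization" concrete, here is a minimal sketch of the classic XNOR-Net-style approach: replace each weight with its sign (the 1 bit) plus one full-precision scale per output row, where the scale `alpha = mean(|w|)` is the closed-form minimizer of the reconstruction error for fixed signs. This is an illustration of the general technique, not SALOMI's actual code; the function names are hypothetical.

```python
import numpy as np

def binarize_posthoc(w):
    """Strict 1-bit post-hoc quantization of a weight matrix:
    sign bits plus one per-row (output-channel) scale.
    alpha = mean(|w|) minimizes ||w_row - alpha * sign(w_row)||_2
    for each row, given the signs (XNOR-Net closed form)."""
    signs = np.where(w >= 0, 1.0, -1.0)            # the 1-bit codes
    alpha = np.abs(w).mean(axis=1, keepdims=True)  # per-row scale
    return signs, alpha

def dequantize(signs, alpha):
    """Reconstruct the approximate weights from codes and scales."""
    return signs * alpha

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)
signs, alpha = binarize_posthoc(w)
w_hat = dequantize(signs, alpha)
# Relative reconstruction error; substantial even with optimal scales,
# which is one intuition for why strict 1 bpp struggles.
err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
```

Even with the optimal per-row scale, a Gaussian-like weight matrix keeps a large residual under this scheme, which is consistent with the repo's finding that a little extra budget (~1.2–1.35 bpp via VQ or mixed precision) buys disproportionate quality.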