April 10, 2026
Receipts or it didn’t answer
The tool that won't let AI say anything it can't cite
Promises receipts for every AI claim — users cry “yeah right”
TLDR: Grainulator claims to force AI to speak only with sources and to block conflicting answers. Commenters dunked on demo errors, browser crashes, and “just prompts” vibes, sparking a fight over whether citation-locked AI is a breakthrough or simply clever prompt theater that still hallucinates.
A new Claude Code plugin called Grainulator is pitching a bold promise: an AI that won’t say anything it can’t cite. The tool runs “research sprints,” logs every statement as a claim, grades confidence with evidence tiers, and even blocks output if facts conflict. There’s a slick in-browser demo at grainulator.app and a GitHub repo that shows how it compiles a decision-ready brief.
But the community quickly went from intrigued to snarky. One commenter joked that since AI sometimes confuses itself with the user, it’ll just cite you — “you said X.” Another tried a “Car Wash” test and said it flubbed the reasoning and focused on emissions, quipping that if this magic were real, model makers would’ve shipped it already. Then came a facepalm: the demo answered “Who directed Scarface?” by crediting the 1932 film to Michael Curtiz instead of Howard Hawks — a high-profile flub for a tool about receipts.
More drama: someone said the site crashes Firefox, and another called the repo’s big promise “just prompts,” arguing the bot can still say anything it wants. Fans of the idea like the confidence scoring and refusal-to-answer vibe, but skeptics dub it “prompt theater.” The mood: receipts are great — if they’re real, relevant, and don’t crash your browser.
Key Points
- Grainulator is a Claude Code plugin that orchestrates multi-pass research sprints using typed claims and graded evidence.
- A seven-pass compiler scores confidence, detects conflicts and bias, and blocks output until conflicts are resolved.
- Installation uses the Claude Plugin Marketplace; Node.js ≥ 20 is required for MCP servers run via npx.
- Troubleshooting includes switching GitHub access from SSH to HTTPS to resolve permission errors, with manual clone as an alternative.
- A PWA demo (grainulator.app) runs in-browser via WebLLM using SmolLM2-360M, offering local inference and a chat-based interface.
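Grainulator's internals aren't documented here, but the pipeline the Key Points describe (typed claims, graded evidence tiers, output blocked on conflict) could be sketched roughly like this. All names, tiers, and scoring weights below are hypothetical illustrations, not the plugin's actual API:

```python
# Hypothetical sketch of a citation-locked claim pipeline, based only on the
# behavior described above. Names and tier weights are invented for illustration.
from dataclasses import dataclass, field
from enum import IntEnum

class EvidenceTier(IntEnum):
    PRIMARY = 3      # e.g. an original document or dataset
    SECONDARY = 2    # e.g. a reputable summary citing a primary source
    HEARSAY = 1      # e.g. "you said X" -- the failure mode commenters mocked

@dataclass
class Claim:
    text: str
    sources: list[str]                 # citations backing the claim
    tier: EvidenceTier
    contradicts: list[str] = field(default_factory=list)  # conflicting claim texts

    @property
    def confidence(self) -> float:
        # Toy score: a claim with no sources gets zero confidence.
        return 0.0 if not self.sources else self.tier / EvidenceTier.PRIMARY

def compile_brief(claims: list[Claim]) -> list[str]:
    # Refuse to emit anything while conflicts are unresolved, mirroring the
    # "blocks output until conflicts are resolved" behavior.
    conflicts = [c for c in claims if c.contradicts]
    if conflicts:
        raise ValueError(f"unresolved conflicts in {len(conflicts)} claim(s)")
    # Only emit claims that actually carry a citation.
    return [f"{c.text} [{', '.join(c.sources)}] ({c.confidence:.2f})"
            for c in claims if c.confidence > 0]
```

In this toy version, a claim whose only source is the user lands at the hearsay tier and scores accordingly, and an unsourced claim is silently dropped from the brief: the skeptics' point is that none of this prevents the underlying model from fabricating the claim text or the citation itself.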