February 16, 2026
Works on my SELECT
Building SQLite with a small swarm
AI swarm builds a mini database—commenters ask if it even works
TLDR: An engineer used three AI models to assemble a SQLite-like Rust database with ~19k lines and 282 passing tests. Commenters cheered the experiment but challenged how shallow the verification really was, demanding real benchmarks and SQLite-grade testing, and the thread turned into a debate over flashy AI workflows versus proven, production-ready results.
An engineer unleashed a squad of AI coders (Claude, Codex, and Gemini) to stitch together a mini-SQLite clone in Rust, boasting 19k lines, a parser-to-storage pipeline, and 282 tests “all passing.” Cue the comments: the crowd split between “wow” and “whoa, slow down.” Skeptics zeroed in on the test story. One top reply asked, “Did they pass all unit tests in the end?” while another snarked that the oracle checks boil down to “three trivial SELECT statements,” pointing to SQLite’s famously brutal test suite [https://sqlite.org/testing.html]. Translation: real databases aren’t proven by three queries and a dream.
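For readers wondering what “sqlite as oracle” means in practice, here is a minimal sketch of the general technique, not the repo’s actual harness: the same SQL goes to the real sqlite3 CLI (assumed to be on PATH) and to the home-grown engine, and the outputs are compared. The function names `toy_engine_execute` and `sqlite_oracle` are illustrative assumptions.

```rust
use std::process::Command;

/// Hypothetical stand-in for the swarm-built engine; in the real repo this
/// would call into the Rust parser/planner/executor pipeline.
fn toy_engine_execute(sql: &str) -> String {
    // Hard-coded for illustration only.
    match sql.trim() {
        "SELECT 1;" => "1\n".to_string(),
        _ => unimplemented!("demo engine only knows one query"),
    }
}

/// Run the same statement through the real sqlite3 CLI and treat its output
/// as the expected answer ("sqlite as oracle").
fn sqlite_oracle(sql: &str) -> String {
    let out = Command::new("sqlite3")
        .arg(":memory:")
        .arg(sql)
        .output()
        .expect("sqlite3 CLI must be installed");
    String::from_utf8_lossy(&out.stdout).into_owned()
}

#[test]
fn select_one_matches_oracle() {
    let sql = "SELECT 1;";
    assert_eq!(toy_engine_execute(sql), sqlite_oracle(sql));
}
```

The skeptics’ point is that a handful of checks like this proves very little next to SQLite’s own multi-harness test suite.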
Others wanted substance over spectacle: performance numbers, trade-offs, and why this AI assembly line beats one focused human. The repo’s lock-file churn and coordination overhead were treated like reality TV (roughly half the commits were agent wrangling), sparking jokes about “AI interns fighting over a task board.” There were memes too: “Works on my SELECT” and “coalescer only ran once” became punchlines about duplication chaos.
Fans argued the process is the product: tight modules, fast feedback loops, and sqlite-as-oracle testing show that multi-agent coding can ship. Critics called it a demo until it survives real tests. Verdict from the comments: promising experiment, unfinished proof.
Key Points
- A SQLite-like database engine in Rust was built by coordinating Claude, Codex, and Gemini, yielding ~19k lines of code and 282 passing unit tests.
- The system includes a parser, planner, volcano executor (see the iterator sketch after this list), pager, B+ trees, WAL, recovery, joins, aggregates, indexing, transaction semantics, grouped aggregates, and stats-aware planning.
- Workflow: one Claude bootstrap created the skeleton and tests; six agents (two per model) iterated by claiming tasks, testing against sqlite3, and pushing updates.
- Coordination overhead was high: 84 of 154 commits (54.5%) dealt with lock/claim/stale-lock/release management; a rough sketch of such file-based claiming also follows this list.
- Replication instructions and prerequisites are provided; limitations include documentation sprawl, lack of unified token usage tracking, and coalescer underuse mid-run.
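As a rough illustration of the “volcano executor” mentioned above, here is a minimal sketch of the general iterator model, not the repo’s actual code: every operator exposes a `next()` call and pulls rows from its child on demand. All names and types here are assumptions for illustration.

```rust
// Volcano (iterator) execution model in miniature: operators pull rows
// one at a time from their children via next().

type Row = Vec<i64>;

trait Operator {
    fn next(&mut self) -> Option<Row>;
}

/// Leaf operator: scans an in-memory "table".
struct Scan {
    rows: std::vec::IntoIter<Row>,
}

impl Operator for Scan {
    fn next(&mut self) -> Option<Row> {
        self.rows.next()
    }
}

/// Filter operator: pulls from its child until a row passes the predicate.
struct Filter<P: FnMut(&Row) -> bool> {
    child: Box<dyn Operator>,
    pred: P,
}

impl<P: FnMut(&Row) -> bool> Operator for Filter<P> {
    fn next(&mut self) -> Option<Row> {
        while let Some(row) = self.child.next() {
            if (self.pred)(&row) {
                return Some(row);
            }
        }
        None
    }
}

fn main() {
    // Roughly: SELECT * FROM t WHERE col0 > 1, evaluated volcano-style.
    let scan = Scan { rows: vec![vec![1], vec![2], vec![3]].into_iter() };
    let mut plan = Filter { child: Box::new(scan), pred: |r: &Row| r[0] > 1 };
    while let Some(row) = plan.next() {
        println!("{:?}", row);
    }
}
```

A real engine would stack joins, aggregates, and index scans as further operators on the same trait, which is what makes the model attractive for module-by-module agent work.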
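The claim/lock mechanism behind the coordination overhead could look roughly like this; the post’s exact protocol isn’t shown, so this is an assumed sketch in which each agent tries to atomically create a claim file for a task and backs off if another agent got there first. The `locks/` directory, file naming, and `try_claim` helper are all hypothetical.

```rust
use std::fs::OpenOptions;
use std::io::Write;

/// Try to claim a task by atomically creating `locks/<task>.lock`.
/// `create_new(true)` fails if the file already exists, so only one
/// agent wins the claim. Paths and naming are illustrative assumptions.
fn try_claim(task: &str, agent: &str) -> std::io::Result<bool> {
    std::fs::create_dir_all("locks")?;
    match OpenOptions::new()
        .write(true)
        .create_new(true) // atomic "create only if absent"
        .open(format!("locks/{task}.lock"))
    {
        Ok(mut f) => {
            writeln!(f, "claimed by {agent}")?;
            Ok(true) // this agent owns the task
        }
        Err(e) if e.kind() == std::io::ErrorKind::AlreadyExists => Ok(false),
        Err(e) => Err(e),
    }
}

fn main() -> std::io::Result<()> {
    if try_claim("implement-btree-split", "claude-1")? {
        println!("won the claim, start working");
    } else {
        println!("someone else holds the lock, pick another task");
    }
    Ok(())
}
```

Stale-lock detection and release would add more commits on top of this, which is consistent with more than half the history being coordination traffic.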