January 6, 2026

AI-coded tree ignites C++ flamewar

High-performance header-only container library for C++23 on x86-64

Dev drops turbo C++ library; AI helped build it, comments explode

TLDR: A new C++ library claims big speed gains on large data by using smart memory tricks and chip features. Commenters are split between hype over the 2–5x numbers and skepticism about Linux-only benchmarks, AI-assisted coding, and whether in-memory B+trees matter outside massive workloads.

Move over, slow data structures: a new C++ library promises jaw‑dropping speed, claiming “2–5× faster” than popular options by using big memory pages (“hugepages”), special chip tricks (SIMD: doing many comparisons at once), and tuned layouts. The repo is live on GitHub and the comments went full nitro.

Speed fans cheered as mattgodbolt spotlighted the headline claim, while skeptics instantly yelled “benchmark theater!” They want smaller tests, real‑world mixes, and something beyond Linux on AVX2 chips. One summed it up as: huge pages, huge promises — show the receipts.

Then came the twist: the author says AI buddy “Claude” helped build it. That split the room. Some called it the future of coding (“AI pair‑programmer shipping real wins!”). Others side‑eyed the idea of an AI‑assisted B+tree running in production, tossing around “trust but verify” memes.

A surprisingly wholesome subplot erupted when dicroce asked why anyone wants an in‑memory B+tree at all. The thread turned into a mini‑class: fans argued its layout keeps data tidy and cache‑friendly for huge datasets; doubters said most apps won’t notice and simpler maps are safer.

Bottom line: if these big‑tree gains stick, it’s a flex. If not, it’s another “SIMD or SIM‑don’t” moment the internet won’t forget.

Key Points

  • Fast Containers is a header-only C++23 library focused on a high-performance B+tree for x86-64 with AVX2, primarily tested on Linux.
  • Benchmarks report 2–5× faster insert/find/erase operations versus Abseil’s btree and std::map for large trees (around 10 million elements).
  • Performance gains derive from hugepage allocator integration (reducing TLB misses), SIMD-accelerated node searches using AVX2, and tunable node sizes.
  • Prerequisites include GCC 14+ or Clang 19+, CMake 3.30+, and an AVX2-capable CPU (Intel Haswell 2013+ or AMD Excavator 2015+).
  • The project originated from experimenting with AI agents; Claude assisted in implementation, and platform support is currently Linux-focused, with x86-64-specific SIMD and rdtscp-based benchmark timing.

Hottest takes

"2-5× faster across insert/find/erase operations" — mattgodbolt
"Claude proved surprisingly adept at helping implement this quickly" — ognarb
"What are the advantages of an in memory b+tree?" — dicroce
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.