December 18, 2025
SQL that brings the heat
Show HN: Spice Cayenne – SQL acceleration built on Vortex
Is Cayenne hotter than CedarDB? The spice-level debate kicks off
TLDR: Spice AI launched Cayenne to speed big-data SQL using Vortex, promising faster queries and lower memory. The thread kicked off with a CedarDB comparison and skepticism over bold speed claims, while jokes about “spice levels” kept things lively—everyone wants benchmarks before buying the hype.
Spice AI just dropped Cayenne, a data accelerator that aims to make queries over huge datasets feel instant by leaning on the open-source Vortex columnar format. The pitch: faster queries and lower memory than embedded engines like DuckDB and SQLite, plus a "hybrid search" that mixes AI-style vector similarity, full-text, and keyword matching, all in plain SQL. There's even an AI angle: run LLM inference (think text-generating AI) straight from the platform, with models safely sandboxed, and a flashy demo to prove it.
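For readers wondering what "hybrid search in plain SQL" might look like, here's a generic sketch using SQLite's FTS5 full-text index plus a user-defined similarity function. To be clear, this is not Cayenne's actual syntax or API; the table, embeddings, and `cosine` function are all hypothetical stand-ins to show how keyword filtering, full-text relevance, and vector similarity can coexist in one query.

```python
# Generic hybrid-search sketch (NOT Spice Cayenne's API):
# keyword filter + full-text rank + vector similarity in one SQL query.
import sqlite3
import math
import json

def cosine(a_json, b_json):
    """Cosine similarity between two JSON-encoded vectors."""
    a, b = json.loads(a_json), json.loads(b_json)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

con = sqlite3.connect(":memory:")
con.create_function("cosine", 2, cosine)

# FTS5 table: body is full-text indexed, embedding is stored as-is.
con.execute("CREATE VIRTUAL TABLE docs USING fts5(body, embedding UNINDEXED)")
con.executemany(
    "INSERT INTO docs(body, embedding) VALUES (?, ?)",
    [
        ("fast columnar query engine", json.dumps([0.9, 0.1])),
        ("slow row-oriented storage", json.dumps([0.1, 0.9])),
        ("columnar format for data lakes", json.dumps([0.8, 0.2])),
    ],
)

query_vec = json.dumps([1.0, 0.0])  # pretend this is the query's embedding
rows = con.execute(
    """
    SELECT body,
           bm25(docs) AS text_score,          -- full-text relevance
           cosine(embedding, ?) AS vec_score  -- vector similarity
    FROM docs
    WHERE docs MATCH 'columnar'               -- keyword filter
    ORDER BY vec_score DESC
    """,
    (query_vec,),
).fetchall()

for body, text_score, vec_score in rows:
    print(body, round(vec_score, 2))
```

A real engine would compute the vector scores against an index rather than row by row, but the query shape, blending a `MATCH` predicate with relevance and similarity scores, is the gist of the hybrid-search pitch.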
But the community mood? Spicy. The very first volley was the classic scoreboard check: "how does this compare to CedarDB?", setting the tone for a day of side-by-side benchmarking demands and raised eyebrows at "up to 100x faster" claims. The thread vibe leans skeptical-but-curious: can Cayenne really outpace the usual suspects without guzzling RAM? Meanwhile, the branding got roasted: jokes about "spice levels" and whether "Paprika" is next, plus memes about SQL becoming "the new prompt." Expect the standard Show HN ritual: requests for real numbers, plain-English breakdowns of the data lake formats (Parquet, Iceberg, Delta) it reads from object storage, and a tug-of-war over whether we need another engine or just better docs. It's performance promises vs. proof, with snark on the side.
Key Points
- Spice AI introduced Spice Cayenne, a next-generation data accelerator built on the Vortex columnar format with an embedded metadata engine.
- Cayenne targets high-scale, low-latency data lake workloads and claims faster queries and lower memory usage than DuckDB and SQLite.
- The Spice platform offers hybrid SQL search (vector similarity, full-text, keyword), LLM inference, secure AI sandboxing, and AI model serving.
- Operational features include real-time change data capture, distributed query scaling, edge-to-cloud deployments, and an MCP Server & Gateway.
- Spice focuses on object storage and supports open data lake formats (Parquet, Iceberg, Delta), with integrations to 30+ data sources and docs for both the cloud and OSS offerings.