April 17, 2026
Mediocre or miracle?
Average Is All You Need
AI churns out ‘good enough’ work—cue cheers, panic, and pacemaker jokes
TLDR: A new tool, [rawquery](#), lets AI turn plain English into charts, shrinking days of data work to minutes. Fans cheer the time-saver, while skeptics fear layoffs and argue “average” isn’t safe for high‑stakes tasks—raising the pressure on humans to deliver the exceptional.
The post claims the age of “average” has arrived: large language models (LLMs, the chatty AIs) can now crank out decent charts and reports from your data in minutes. With rawquery, you connect tools like Stripe (payments) and HubSpot (email), ask in plain English, and the AI writes the SQL (the database query language) and draws the charts. No meetings, no messy “attribution models,” just quick answers. Fans call it liberating and practical—“actual magic,” said one, giddy that days of grunt work drop to an hour.
But the comments lit up with drama. A doomier camp asked: if AI can do the “average,” what’s left for us? One commenter joked the boss will ask the bot, then ask you to pack. A chorus of sarcasm followed: “Average is enough? Cool—enjoy your car that starts 50% of the time,” plus the inevitable pacemaker punchline. The middle ground? “Average is all you need, if your needs are average.” Another warned that without something exceptional, you have no moat—AI can’t breeze through the truly hard, creative bits. Verdict: the tech is impressive, the time savings are real, and the vibe is split between “this is magic” and “this is my job,” with meme-fueled gallows humor keeping score. Read the post and bring popcorn.
Key Points
- The article introduces rawquery, a data platform built to be operated by LLM agents for data analysis.
- Users can connect sources like Stripe and HubSpot, then ask questions in plain language to generate SQL, run queries, and create charts.
- A CLI-based example shows connecting Stripe and HubSpot, syncing data, and querying outcomes of an email campaign.
- The agent returns an example result indicating the email cohort had a 46% higher average basket without manual attribution modeling.
- Users can iteratively refine analysis in natural language (e.g., breaking results down by week).
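For the curious, the headline comparison is just a cohort-average lift. Here is a minimal sketch of the arithmetic the agent is said to perform, with invented table shapes and numbers (rawquery's actual schema and queries are not shown in the post):

```python
# Hypothetical illustration of the "46% higher average basket" comparison:
# average order value for customers who got the campaign email vs. those
# who did not. Rows and values are made up for the sketch.
from statistics import mean

orders = [
    # (customer_id, basket_total_usd, got_campaign_email)
    ("c1", 146.0, True),
    ("c2", 120.0, True),
    ("c3", 172.0, True),
    ("c4", 100.0, False),
    ("c5",  90.0, False),
    ("c6", 110.0, False),
]

emailed = [total for _, total, got in orders if got]
control = [total for _, total, got in orders if not got]

# Percent lift of the emailed cohort over the control cohort.
lift = (mean(emailed) - mean(control)) / mean(control) * 100
print(f"Email cohort average basket is {lift:.0f}% higher")
```

In practice the agent would generate SQL (a `GROUP BY` over joined Stripe and HubSpot tables) rather than Python, but the comparison it reports reduces to this calculation.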