May 2, 2026
Cheap bot, expensive drama
DeepSeek V4: almost on the frontier, at a fraction of the price
AI fans are freaking out over a super-cheap new chatbot—and fighting over the fine print
TLDR: DeepSeek’s new AI looks almost top-tier while costing far less than big-name rivals, which is why people are paying attention. But the comments split hard between fans praising its fewer refusals, skeptics warning about hidden usage costs, and critics asking why privacy concerns suddenly vanish when the price is low.
DeepSeek just dropped two new AI models, and the comment section instantly turned into a price-war pep rally with a side of paranoia. The big headline is simple: this new release looks shockingly cheap compared with the biggest names in AI, and that has people acting like they just found designer clothes in a bargain bin. One tester said the pricier version feels like a premium rival in personality, then promptly stress-tested it on a large coding project to see whether the low price was too good to be true. That set the tone: excitement first, skepticism immediately after.
The strongest opinions were wildly split. One camp basically said, "Finally, an AI that just does what I ask," especially for touchy tasks where other tools allegedly refuse or slap on safety warnings. That sparked the classic internet reaction: cheers from the freedom crowd, side-eye from everyone else. Another camp slammed the whole celebration, asking why people rage when Western companies train on user data but suddenly go quiet when a cheaper Chinese model enters the chat. That was the thread’s real drama bomb.
And then came the accountant-energy spoilers: several commenters warned that the bargain may not be as bargain-y as it looks if the model burns through extra "thinking" words behind the scenes. Even the jokes had a bite—people were basically treating DeepSeek like the budget airline of AI: cheap ticket, but check the hidden fees before you board. Meanwhile, Simon Willison’s now-traditional pelican-on-a-bicycle test gave everyone a weirdly wholesome mascot for the chaos.
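The hidden-fee worry boils down to simple arithmetic: if a "cheap" model emits lots of billed reasoning tokens you never see, its effective cost per answer can beat a pricier rival's. Here's a back-of-envelope sketch; every price and token count below is made up for illustration, not taken from DeepSeek's or anyone else's actual pricing.

```python
# Back-of-envelope: a low per-token price can be offset by extra hidden
# "thinking" tokens that are billed as output. All numbers are hypothetical.

def effective_cost(price_per_m_tokens, visible_tokens, hidden_reasoning_tokens):
    """Total output cost in dollars when billed reasoning tokens are invisible."""
    total_tokens = visible_tokens + hidden_reasoning_tokens
    return price_per_m_tokens * total_tokens / 1_000_000

# A $1/M model that "thinks" at length can out-spend a $4/M model
# that answers directly (again, prices here are invented).
cheap = effective_cost(1.00, visible_tokens=500, hidden_reasoning_tokens=4000)
pricey = effective_cost(4.00, visible_tokens=500, hidden_reasoning_tokens=0)
print(f"cheap model: ${cheap:.4f}, pricey model: ${pricey:.4f}")
```

With these invented numbers the "cheap" model's answer actually costs more, which is exactly the commenters' point: the sticker price only tells you the fare, not the baggage fees.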
Key Points
- DeepSeek released two preview V4 models, DeepSeek-V4-Pro and DeepSeek-V4-Flash, on 24 April 2026.
- Both models are Mixture-of-Experts systems with 1M-token context windows; Pro has 1.6T total parameters and Flash has 284B total parameters.
- The models are offered under the MIT license and hosted on Hugging Face, with reported sizes of 865GB for Pro and 160GB for Flash.
- The article compares DeepSeek pricing with OpenAI, Gemini, and Anthropic models and reports that Flash and Pro are the cheapest options in their respective comparison groups.
- DeepSeek’s paper claims major efficiency improvements over DeepSeek-V3.2 for 1M-token contexts, including lower single-token FLOPs and reduced KV cache size.