Show HN: I successfully failed at one-shot-ing a video codec like h.264

Hacker builds ‘doomed’ video codec with AI… and the internet can’t stop arguing about it

TLDR: A developer used AI to help invent a new video format that massively underperforms today’s standards, then shared the flop as a learning experiment, not a product. The comments exploded into debates over creepy AI role‑playing, lost respect for real expertise, and how hard serious tech actually is.

A coder tried to get an AI to help build a brand‑new video format from scratch, hoping to rival the tech behind YouTube and Netflix – and instead proudly unveiled a glorious failure that makes files almost twenty times bigger. The wild part? He knew it might flop and posted it anyway as a learning project, which instantly turned the comment section into a boxing ring.

On one side you’ve got the "engineers with clipboards" scolding him for skipping the boring planning phase. One commenter essentially said that you’re supposed to design the codec carefully in a simple language first, then translate it to fast code, turning the thread into a live lecture on How Real Grown‑Up Software Is Supposed To Be Done. Another crowd couldn’t get over the fact that he asked the AI to role‑play real, named video compression experts, calling it "a bit creepy" and wondering if that actually made the system dumber, not smarter.

But the spiciest take was about what AI is doing to us. One user ranted that tools like this make people think huge, complex systems are trivial, and that we’ve lost respect for the years of expertise behind the tech we use every day. Still, many quietly loved the vibes: a self‑aware, over‑ambitious experiment that failed loudly, taught a ton, and gave the internet a new toy to roast and admire at the same time.

Key Points

  • Sinter is a patent-free experimental video codec built with Claude Code agent teams to test one-shot AI team workflows.
  • At comparable luma quality (~49 dB), Sinter’s output is about 18.6x larger than H.264 on a 256x256, 30-frame test.
  • The implementation (~5,000 lines of C) includes lapped transforms, hybrid PVQ/scalar quantization, rANS entropy coding, and P-frame inter prediction.
  • The compression gap is attributed to missing tools (sub-pixel motion compensation, B-frames), PVQ signaling overhead, and fewer entropy-coding contexts.
  • A patent-safe improvement ceiling is estimated at 4–6x H.264 with half-pel motion and better contexts; matching H.264 would require adopting its toolset.
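For context on the numbers above: "luma quality" refers to PSNR computed on the Y (luma) plane, and the 18.6x figure is a ratio of encoded sizes at matched PSNR. A minimal sketch of how those two metrics are typically computed (the function name and byte counts here are illustrative assumptions, not values from the project):

```python
import math

def luma_psnr(ref, dist, peak=255.0):
    """PSNR over an 8-bit luma plane, given two equal-length pixel lists."""
    assert len(ref) == len(dist) and len(ref) > 0
    mse = sum((a - b) ** 2 for a, b in zip(ref, dist)) / len(ref)
    if mse == 0:
        return float("inf")  # identical planes
    return 10.0 * math.log10(peak * peak / mse)

# Hypothetical sizes: at matched ~49 dB luma PSNR, the write-up reports
# Sinter's bitstream is ~18.6x the H.264 one on the same test clip.
h264_bytes = 12_000                      # illustrative, not measured
sinter_bytes = int(h264_bytes * 18.6)
ratio = sinter_bytes / h264_bytes        # ~18.6
```

Comparisons like this only mean something at matched quality, which is why the write-up pins both encoders to roughly the same luma PSNR before comparing sizes.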

Hottest takes

"You had Claude create a team of agents imitating actual specific people? That’s a bit…creepy" — wrs
"Generally you have to plan and design the codec first, then ask for a reference implementation" — ronsor
"AI has completely destroyed people’s ability to appreciate the effort and domain knowledge" — ComputerGuru
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.