November 29, 2025
GPU glitz vs. brainy bits
I Know We're in an AI Bubble Because Nobody Wants Me
Veteran coder says it’s all GPUs, no brains — comments erupt over efficiency
TLDR: AI veteran Pete Warden says the industry prefers buying expensive hardware over hiring teams to make AI cheaper and faster. Commenters clash: some say optimization happens quietly in-house, others claim cloud giants avoid it because efficient “local AI” would undercut their business—this matters for cost and control.
Pete Warden, an early deep learning OG behind Jetpac and mobile TensorFlow, posted a raw, funny-sad rant: the AI bubble is real because nobody wants optimization—just more hardware. He says companies are splurging on pricey graphics chips and mega data centers instead of hiring teams to make AI run cheaper and faster. The comments? Absolute fireworks. One reader cheered the hustle but asked if the industry is really choosing "spend more" over "optimize and spend less." Another swung back with a blunt "This is just not correct," arguing that if optimization really mattered, big firms would already be doing it in-house.
Then came the spicy plot twist: a commenter warned that if AI gets truly efficient, it goes local—on your phone—and cloud giants lose their cash cow. That take lit up the thread with “GPU-hoarding” jokes and eye-rolls at “optimize bros.” Others suggested Pete just wants to run the show his way, while another pointed out that top talent quietly slides into big labs (hello, acquihires and OpenAI shoutouts), so the “nobody wants optimization” narrative might just be off the radar. Bottom line: vibes are split between “hardware hype train” and “efficiency revolution,” with a dash of meme-y doom about cloud monopolies. Drama served.
Key Points
- In 2012, Pete Warden, then CTO of Jetpac, began using AlexNet and trained models via CudaConvNet on a dual-GPU gaming rig.
- Amazon's GPU instances were costly and better suited to video streaming, and Caffe's CPU support focused on training, not inference.
- Warden created DeepBeliefSDK to run inference at scale on low-cost hardware, using hundreds of cheap Amazon EC2 instances.
- The framework was efficient enough to run on phones; after Jetpac was acquired, Warden led mobile support for TensorFlow at Google.
- Warden argues current AI investment prioritizes hardware (GPUs, data centers, power) while funding for ML infrastructure engineering is limited.
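To make the "optimize and spend less" side of the argument concrete, here is a minimal sketch (not from Warden's post; names and numbers are illustrative) of one classic efficiency technique: quantizing float32 weights to int8, which cuts model memory by roughly 4x and often speeds up bandwidth-bound inference enough to move it onto cheap hardware or phones.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> (int8 values, scale)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction of the original float32 weights."""
    return q.astype(np.float32) * scale

# A made-up 1024x1024 weight matrix standing in for one model layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024)).astype(np.float32)

q, scale = quantize_int8(w)
restored = dequantize(q, scale)

print(f"float32 size: {w.nbytes // 1024} KiB")  # 4096 KiB
print(f"int8 size:    {q.nbytes // 1024} KiB")  # 1024 KiB
print(f"max abs error: {np.abs(w - restored).max():.4f}")
```

Real mobile stacks (including the TensorFlow Lite work that grew out of Warden's team) use more elaborate per-channel and quantization-aware schemes, but the basic trade is the same: a small, bounded accuracy loss in exchange for a large, predictable cost reduction.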