April 8, 2026
Roomba Rumble in the comments
Show HN: We built a camera-only robot vacuum for less than $300 (well, almost)
Hackers cheer, skeptics jeer: DIY cam‑only robo‑vac sparks “map it or bump it” brawl
TLDR: Two roommates built a camera-only robo‑vac that streams video to a laptop to navigate, with mixed results. Commenters split between “you need mapping and sensors” and “try depth tools and better training,” turning a DIY clean-up into a debate over cheap hacks versus real-world cleaning power.
Two roommates built a budget robot vacuum that “sees” with just a camera and off‑the‑shelf parts—no fancy laser sensors, no onboard computer. Instead, the bot streams video to a laptop that decides where to go, trained on the duo’s own driving demos. Their field notes are delightfully chaotic: random reverse moves, stopping in open space, and occasionally charging at obstacles. Even they admit the training “isn’t super well” and the kitchen turns into a ping‑pong match between forward and reverse.
Enter the comments, where the dust really flies. One camp says the model simply memorized its training runs—“needs way more data”—with some urging a pricey vision+language model to teach it. Another camp preaches realism: “you need mapping,” the boring-but-true method most vacuums use to cover rooms systematically, not just bump and pray. A helpful middle ground appears: try a depth trick like Apple’s Depth Pro to estimate distance from a single camera, or even start with a laser distance sensor and later train the camera to imitate it.
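The middle-ground suggestion boils down to: estimate a per-pixel depth map from the single camera (with a monocular model like Depth Pro), then gate driving decisions on the nearest distance ahead. Here is a minimal NumPy sketch of just that gating step; the depth map is assumed to be produced upstream, and the function name, corridor geometry, and threshold are illustrative, not from the post:

```python
import numpy as np

def choose_action(depth_m: np.ndarray, stop_dist: float = 0.3) -> str:
    """Pick a drive action from a per-pixel depth map (in meters), such as
    a monocular depth model would produce. Hypothetical helper, for
    illustration only."""
    h, w = depth_m.shape
    # Only look at the central lower "corridor" the robot would drive into.
    corridor = depth_m[h // 3:, w // 3: 2 * w // 3]
    nearest = corridor.min()
    if nearest < stop_dist:   # something close dead ahead: back off
        return "REVERSE"
    return "FORWARD"

# Toy depth map: 2 m of open floor, with a 0.2 m obstacle straight ahead.
depth = np.full((120, 160), 2.0)
depth[80:, 70:90] = 0.2
print(choose_action(depth))  # → REVERSE
```

The same idea works if the "depth" comes from a cheap laser distance sensor instead, which is why commenters suggested starting there and later training the camera to imitate it.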
Meanwhile, the memes vacuum up the room: “laptop‑on‑a‑leash,” “see‑no‑depth cleaner,” and “the vac that vacuums your patience.” Some cheer the scrappy under‑$300 build; others want real‑world clean floors, not a science fair. Verdict: a gloriously messy showdown between DIY hustle and hard‑won robot reality.
Key Points
- DIY robot vacuum built with off-the-shelf parts under a $500 budget and a weekly charging goal.
- Robot streams camera frames to a laptop for inference due to lack of onboard compute.
- Training data collected via teleoperation; actions were FORWARD, REVERSE, TURN_CCW, TURN_CW, STOP.
- A simple CNN trained with behavior cloning showed issues: false reversals, oscillation, mispredicted STOP/FORWARD, and weak turning signals.
- After adding a train/validation split, validation loss was very low; next steps include data augmentation and pretraining to improve generalization.
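For the curious, the behavior-cloning recipe described above (camera frame in, one of five discrete actions out, trained on teleoperation labels) can be sketched in a few lines of PyTorch. Everything here is illustrative, not the authors' actual model: layer sizes, frame resolution, and the random stand-in data are all assumptions.

```python
import torch
import torch.nn as nn

ACTIONS = ["FORWARD", "REVERSE", "TURN_CCW", "TURN_CW", "STOP"]

class PolicyCNN(nn.Module):
    """Tiny CNN mapping an RGB frame to logits over the 5 actions."""
    def __init__(self, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_actions)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PolicyCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for recorded camera frames and the teleoperator's action labels.
frames = torch.rand(8, 3, 96, 96)
labels = torch.randint(0, len(ACTIONS), (8,))

# Behavior cloning = supervised classification against the human's actions.
logits = model(frames)
loss = loss_fn(logits, labels)
opt.zero_grad()
loss.backward()
opt.step()

# At runtime, the laptop would run this per streamed frame.
action = ACTIONS[model(frames[:1]).argmax(dim=1).item()]
```

The train/validation split the authors added is then just a matter of partitioning `(frames, labels)` before this loop; augmentation and a pretrained vision backbone slot in the same way.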