January 8, 2026
When your video learns to hallucinate
Show HN: DeepDream for Video with Temporal Consistency
DeepDream Video Is Back: Trippy art or instant motion sickness?
TLDR: DeepDream now does video with motion tracking and masks to keep the hallucinations smooth instead of flickery. Comments split between nostalgia and nausea, with creatives hyped, skeptics unconvinced, and some asking for custom styles—raising the question: visionary art tool or headache machine?
DeepDream, Google's neural network that "dreams" patterns into images, just hit video with smoother hallucinations thanks to motion tracking and occlusion masks. The dev says each frame borrows its "dream" from the last one, so the trippy dogs and eyes hold steady instead of flickering. Cue the crowd: one user cracked, "Reminds me of my first acid trip," while another groaned, "Looking at that video makes me sick." And yes, the classic dog faces are back; a commenter begged for custom styles, asking if they can feed their own images.
Veterans rolled in with receipts. One nostalgic coder bragged about the 2018 DIY era: slicing videos into frames with FFmpeg, blasting them with GoogLeNet, and blending for “crude smoothing.” Today’s version adds optical flow (a method that tracks how things move between frames) and occlusion masking (so foreground objects don’t leave ghost trails). Translation: better vibes, fewer visual hiccups.
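That warp-and-blend idea can be sketched in a few lines. This is a plain-NumPy illustration with a precomputed flow field, not the project's RAFT-based code; the function names, the nearest-neighbor sampling, and the `alpha` blend weight are all illustrative assumptions:

```python
import numpy as np

def warp_with_flow(prev_dream, flow):
    """Warp the previous dreamed frame into the current frame's coordinates.

    flow[y, x] gives, for each current-frame pixel, the (dx, dy) offset to
    its source location in the previous frame (nearest-neighbor sampling
    for simplicity; real pipelines use bilinear interpolation).
    """
    h, w = prev_dream.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(xs + flow[..., 0], 0, w - 1).astype(int)
    src_y = np.clip(ys + flow[..., 1], 0, h - 1).astype(int)
    return prev_dream[src_y, src_x]

def blend_frame(current, warped_prev, occlusion_mask, alpha=0.7):
    """Seed the current frame with the warped previous dream.

    occlusion_mask is 1.0 where the flow is trusted and 0.0 where an
    object was occluded/disoccluded, so those pixels fall back to the
    raw current frame instead of dragging a ghost trail along.
    """
    m = occlusion_mask[..., None] * alpha
    return m * warped_prev + (1.0 - m) * current
```

The blended result would then be fed into the DeepDream iteration for the current frame, which is why each frame "borrows" its hallucination from the last one.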
Then came the auteur energy. A hobbyist filmmaker declared this stuff would transform media, name-dropping film unions and recalling skeptics who wouldn’t touch a DeepDream short. The thread split: art kids cheering psychedelic cinema, practical folks warning motion-sickness bait. Whether you click for nostalgia, innovation, or memes, this drop proves the original DeepDream still knows how to stir the pot.
Key Points
- The project is a PyTorch fork of neural-dream that extends DeepDream to video with temporal consistency.
- Temporal consistency is achieved using RAFT Optical Flow to warp previous dream frames into the current frame.
- Occlusion masking is included to detect overlapping objects and prevent ghosting artifacts.
- A new CLI (video_dream.py) provides video-specific arguments, and model downloads support Inception/GoogLeNet (including Caffe variants).
- Recommended video usage sets -num_iterations 1, with standard DeepDream options available for both video and single-image processing.
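The thread doesn't spell out how the occlusion mask is computed, but a common recipe is a forward-backward flow consistency check: where the forward and backward flows don't cancel out, the pixel was likely occluded. A minimal NumPy sketch of that idea (the threshold and names are assumptions, not the project's implementation):

```python
import numpy as np

def occlusion_mask(flow_fwd, flow_bwd, thresh=1.0):
    """Mark pixels where forward/backward optical flow disagree.

    For a consistent pixel, following the forward flow and then the
    backward flow sampled at the landing point should return (near) zero.
    Large residuals indicate occlusion, so the warped dream is untrusted
    there. Returns 1.0 = trusted, 0.0 = occluded.
    """
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Where each pixel lands after applying the forward flow.
    tx = np.clip(xs + flow_fwd[..., 0], 0, w - 1).astype(int)
    ty = np.clip(ys + flow_fwd[..., 1], 0, h - 1).astype(int)
    # Backward flow at the landing point should cancel the forward flow.
    residual = np.linalg.norm(flow_fwd + flow_bwd[ty, tx], axis=-1)
    return (residual < thresh).astype(np.float32)
```

Pixels masked out this way fall back to the raw video frame, which is what keeps foreground objects from leaving the ghost trails the README warns about.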