November 29, 2025
Light speed or light hype?
Show HN: Zero-power photonic language model – code
Laser-pointer AI claims “zero power”—commenters fire back with puns, doubts, and “meds”
TLDR: A maker claims an AI that “thinks” with a laser and plastic film, was trained on a laptop, and allegedly runs with “zero power.” Commenters clap back: show a real bench demo, and don’t ignore the laser and electronics—while the thread ping-pongs between wonder, puns, and a one-word “meds” mic drop.
A hacker just dropped “Entropica,” a DIY language model that swaps electricity for a laser pointer and plastic transparencies to “think” with light. It was trained on a laptop in under two hours and spits out tiny kids’ stories from a roughly 1,000-word vocabulary, with code and weights released for all. The post brags about “zero-power” inference and even jokes this could run in orbit—cue the comment fireworks.
Skeptics pounced. One top mood: show us the lab demo. As user bastawhiz put it, they’ll “believe it when [they] see it… on a workbench.” Another wave zeroed in on the power claim. “‘Zero power’ does not include the power needed… and the light source,” snapped cpldcpu, calling out the laser and the electronics that feed and read the thing. Others asked how it could work at all without the usual brainy tricks—no electrical amplifiers, no non-linear functions—while a drive-by heckler just dropped “meds,” instantly becoming the thread’s meme.
Still, the dreamers loved the audacity: a $30 laser, printed masks, and unitary math that supposedly makes words fall into place via the brightness of light. Is this the future or just a very bright idea? The vibe: half amazed, half “light on details.” Read the paper and decide.
Key Points
- Entropica implements a language model’s forward pass as a passive linear-optical interferometer, enabling zero electrical power during inference.
- The model uses a 1024-dimensional complex state and 32 unitary layers (Mach–Zehnder meshes, Reck architecture), with token probabilities from the Born rule.
- Training with cross-entropy on a restricted TinyStories-derived corpus achieved coherent generation in under 1.8 hours on a single Apple M3 Pro.
- The dataset includes ~4,000 synthetic “TinierStories” generated using ChatGPT 4.1 with a 1,000-word vocabulary and period-only punctuation.
- A complete optical implementation path is described using printed phase masks, a ~$30 650 nm laser diode, and a photodiode array; code and weights are publicly released.
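For intuition, the “unitary layers + Born rule” pipeline in the points above can be sketched in a few lines of NumPy. This is an illustrative toy, not the released Entropica code: the unitaries here are random (a trained model would learn them as Mach–Zehnder mesh parameters), the dimensions are scaled down, and the input encoding is a made-up placeholder. The essential physics it mimics: a normalized complex state passes through norm-preserving linear layers, and the squared magnitudes of the output amplitudes form a probability distribution over tokens.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_layers = 64, 4  # the described model uses 1024 dimensions and 32 layers

def random_unitary(n, rng):
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # normalize column phases

# Stand-ins for the trained mesh layers: any unitary is optically realizable
layers = [random_unitary(dim, rng) for _ in range(n_layers)]

# Hypothetical input: a normalized complex state encoding the context
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

for U in layers:
    psi = U @ psi  # each layer is linear and norm-preserving (passive optics)

# Born rule: P(token i) = |psi_i|^2; sums to 1 because unitaries keep the norm
probs = np.abs(psi) ** 2
next_token = int(np.argmax(probs))
```

Note that because every layer is linear, the whole stack collapses algebraically to a single matrix product, which is exactly why commenters questioned how the scheme can express anything a model with non-linearities can.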