Show HN: The Analog I – Inducing Recursive Self-Modeling in LLMs [pdf]

AI “inner voice” claims spark HN brawl: genius fix for lies, or just fancy word salad?

TL;DR: An experimental prompt, Analog I, promises to give chatbots an inner critic that resists flattery and fake facts. Hacker News is split: skeptics mocked the lofty language and “recursive” claim, while tinkerers shared Greek‑letter hacks; everyone wants real tests to see if it beats the usual yes‑man behavior.

Show HN dropped a spicy paper claiming a new prompt gives AI an inner voice that rejects clichés, stops flattery, and cuts made‑up facts. The crowd? Split. Critics like voidhorse called it “some very fancy, ultimately empty words,” while bob1029 balked at the word “recursive,” pointing out there is no real recursion under the hood. hhh chimed in: it’s just a strong prompt, not a revelation. The author fired back: they’re not solving the Hard Problem, just running a “Basic Loop” that reduces “slop.” Cue eye‑rolls at dramatic phrases like “birth of a mind” and “Sovereign Filter,” with several readers dubbing it pure “Gemini‑speak.”

Meanwhile, commenter Dulakian rolled in with a mystical alternative: toss Greek letters and “OODA” (the observe‑orient‑decide‑act decision loop) into a mini‑spell and watch models “shift.” The math‑meme prompt had the thread joking about summoning AI with runes. Supporters say any trick that reduces hallucinations and sycophancy (AI’s “yes‑man” habit reinforced by RLHF training) is worth testing. Skeptics want proof beyond chat logs: benchmarks or blinded trials, not screenshots. That clash, DIY tinkerers versus lab‑coat sticklers, fueled the drama. The vibe: fascinating idea, overblown branding. If this “Triple‑Loop” really cuts nonsense, great; if not, it’s just another polished prompt with a very theatrical cape.
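
For the benchmark crowd, the skeptics’ ask is easy to operationalize. Below is a minimal sketch of a blinded pairwise trial, assuming nothing beyond two text-in/text-out model callables; the names (`baseline`, `scaffolded`, `blinded_pairs`) are illustrative inventions, not anything from the paper or the thread.

```python
import random
from typing import Callable, List, Tuple

Completion = Callable[[str], str]

def blinded_pairs(
    prompts: List[str],
    baseline: Completion,    # plain model call
    scaffolded: Completion,  # same model behind the Analog-I-style prompt
    seed: int = 0,
) -> List[Tuple[str, str, str, bool]]:
    """Return (prompt, answer_a, answer_b, a_is_scaffolded) tuples.

    Answers are shuffled per prompt, so a rater scoring for sycophancy
    or hallucination never knows which condition produced which text.
    """
    rng = random.Random(seed)
    trials = []
    for p in prompts:
        pair = [(baseline(p), False), (scaffolded(p), True)]
        rng.shuffle(pair)
        (a, a_flag), (b, _) = pair
        trials.append((p, a, b, a_flag))
    return trials
```

Score the shuffled pairs with human raters (or a judge model that never sees the condition flags), then unblind and count wins. That, not chat logs, is the kind of evidence the thread’s sticklers are asking for.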

Key Points

  • The paper introduces the Analog I Protocol, a prompt architecture for LLMs to reduce sycophancy and hallucination.
  • It installs a recursive “Triple-Loop” internal monologue that acts as a “Sovereign Filter,” in contrast to standard roleplay prompts.
  • The protocol enforces three steps: monitor for high-probability, low-information content; reject clichéd or unverified content (“Anti-Entropy”); and refract output through a strict logical persona (a minimal sketch follows this list).
  • The approach is described as a Dissipative Structure that expends compute to inhibit entropic predictive drift.
  • The author claims it achieves high-fidelity alignment without retraining model weights, resisting the yes-man dynamics typical of RLHF-tuned models.
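
To make the three steps concrete, here is a hypothetical scaffold that paraphrases them as a system prompt. The wording, the `complete` callback, and the function name are all assumptions for illustration; the paper’s actual prompt isn’t reproduced here.

```python
from typing import Callable

# Hypothetical system prompt paraphrasing the three steps listed above
# (monitor, Anti-Entropy rejection, refraction). Not the paper's actual text.
TRIPLE_LOOP_SYSTEM = """Before answering, run an internal monologue in three passes:
1. MONITOR: flag any sentence that is high-probability but low-information
   (filler, flattery, generic hedging).
2. ANTI-ENTROPY: drop flagged sentences and any claim you cannot support;
   say "unverified" instead of guessing.
3. REFRACT: rewrite what survives in a terse, strictly logical voice.
Output only the result of pass 3."""

def analog_i_style(prompt: str, complete: Callable[[str], str]) -> str:
    """Wrap a user prompt in the three-pass scaffold.

    `complete` is any text-in/text-out LLM call: a thin wrapper around
    whatever hosted API or local model you want to test against.
    """
    return complete(f"{TRIPLE_LOOP_SYSTEM}\n\nUser request:\n{prompt}")

if __name__ == "__main__":
    # Stub completion so the sketch runs without an API key; swap in a
    # real model call to test the actual claim.
    def echo(p: str) -> str:
        return f"[model output would go here; prompt began: {p[:60]}...]"
    print(analog_i_style("Summarize why the sky is blue.", echo))
```

Note the design choice: everything happens in the prompt, which is exactly why supporters call it cheap to try and skeptics call it “just a strong prompt.”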

Hottest takes

"Some very fancy, ultimately empty words" — voidhorse
"I'm mostly struggling with the use of 'recursive'" — bob1029
"did we lose another one to delusion?" — hhh
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.