April 4, 2026
Wiki or Wacky?
LLM Wiki – example of an "idea file"
Build a brain wiki with AI—or a shiny old trick? Commenters clash
TLDR: A new “LLM Wiki” pitch says an AI can keep a living, linked notebook of your sources so you don’t start from scratch each time. The comments split hard: some call it old ideas in new clothes, others cheer man‑machine teamwork, while skeptics worry about bloated notes and outsourcing thinking.
The idea: instead of an AI re-reading your files every time, it builds a persistent personal wiki—a web of linked notes it keeps updating as you add sources. Think Obsidian as your bookshelf, the AI as your tireless intern, and your life as the project. Sounds dreamy… until the comments showed up.
One camp is yelling “rebrand alert!” Critics say it’s just RAG—“retrieval augmented generation,” a fancy way of fetching chunks of your files—wearing a nicer outfit. One commenter flatly called it “just RAG,” another labeled it “compaction for RAG,” and the thread briefly turned into a pitchfest as someone plugged their own tool.

Meanwhile, the brain-philosophy crowd got loud: a sharp take warned this gets dangerously close to outsourcing your thinking, sparking hand-wringing about users letting a bot write their mental models. Others went full history nerd, cheering the “man + machine” partnership and dropping Licklider’s 1960 essay on “Man-Computer Symbiosis” like gospel.

The performance skeptics worried about “context pollution,” urging leaner notes and choose-your-own-adventure style paths, not giant AI diaries.
So is this your private Wikipedia or just a remix of old tricks? The crowd’s split: half see a supercharged memory; half see buzzwords and bloat. Either way, the memes landed fast: “AI intern writes my diary, I take the credit,” and “I’m the PM, the bot’s the dev.”
Key Points
- The article proposes an LLM-driven pattern that builds and maintains a persistent, interlinked markdown wiki as an intermediary between raw sources and queries.
- It contrasts this with typical RAG workflows that retrieve document chunks per query without accumulating structured synthesis.
- When new sources are added, the LLM extracts key information, updates entity/topic pages, and flags contradictions to keep the wiki current.
- Users focus on sourcing and questions, while the LLM handles summarizing, cross-referencing, filing, and bookkeeping; Obsidian is used to browse results.
- Use cases span personal tracking, research, book reading, and team wikis; the architecture outlines immutable raw sources and a wiki layer as a structured intermediary.
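To make the architecture concrete, here is a minimal sketch of the filing mechanics: an immutable sources folder, a wiki folder of markdown pages, and an ingest step that updates one entity page per source. All names here are hypothetical, and the LLM extraction step is stubbed with a naive capitalized-word heuristic where a real system would call a model to summarize and decide which pages to touch.

```python
import re
from pathlib import Path

WIKI = Path("wiki")        # the synthesized, interlinked layer
SOURCES = Path("sources")  # raw sources, written once and never edited

def extract_entities(text: str) -> list[str]:
    # Stand-in for the LLM step: treat capitalized words as entity names.
    # A real pipeline would ask a model for entities, summaries, and
    # contradictions against existing pages.
    return sorted(set(re.findall(r"\b[A-Z][a-z]+\b", text)))

def ingest(source_name: str, text: str) -> list[Path]:
    """File a new source: save it unchanged, then append a backlink to
    the wiki page of every extracted entity."""
    SOURCES.mkdir(exist_ok=True)
    WIKI.mkdir(exist_ok=True)
    (SOURCES / source_name).write_text(text)

    updated = []
    for entity in extract_entities(text):
        page = WIKI / f"{entity}.md"
        body = page.read_text() if page.exists() else f"# {entity}\n"
        # Obsidian-style [[wikilink]] back to the immutable raw source.
        page.write_text(body + f"- mentioned in [[{source_name}]]\n")
        updated.append(page)
    return updated
```

The point of the pattern is what accumulates: after several ingests, a query can read the short synthesized entity page instead of re-retrieving chunks from every raw file, which is exactly the contrast with per-query RAG that the article draws.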