April 8, 2026

From chaos to “second brain” chic

Your File System Is Already A Graph Database

Folders = Second Brain? Fans cheer, skeptics say “just search it”

TLDR: One engineer claims that plain folders and wiki-style links, viewed in Obsidian, already form a knowledge graph an AI can navigate to produce better work. Comments erupted over structure vs. "just search," and over privacy: fans love context-rich results, skeptics want flat files, and local models still lag the big cloud AIs.

The internet’s new fixation: turning your messy folders into a “second brain.” Inspired by Karpathy, one engineer says they’ve got 52,447 notes in Obsidian and no fancy database—just files, links, and an AI helper. The pitch is simple: your folders and wiki-style links already form a network of ideas, and an AI (a “large language model”) can navigate it like a pro.

The crowd is split and loud. One camp is hyped on context engineering—as user alxndr puts it, it’s the difference between asking for a doc and giving the AI six months of notes, old designs, and Slack debates. Another camp says “cool, but private”: embedding-shape wants local, offline AI but admits nothing beats the big corporate models—even with a wild flex of 96GB of VRAM. Meanwhile, WillAdams is naming files so neatly they double as spreadsheet entries, basically weaponizing folders for office ops. exossho confesses they’ve tried a zillion structures and lets AI tidy the chaos, but worries about keeping order.

The spice? itake asks why we need folders at all—why not dump everything in one place and let search do the work. PARA fans (Tiago Forte’s system) clap back: structure = better prompts = better results. Meme of the day: “My hard drive is a graph database now,” followed closely by “I don’t need a second brain—I need a first.”

Key Points

  • The author manages a large Obsidian vault (52,447 files) and uses AI without vector stores, RAG, or special databases—only files on disk.
  • The approach treats the file system as a graph database: files as nodes, wikilinks as edges, and folders as taxonomy.
  • An adapted PARA taxonomy structures notes into projects, areas, people, daily, and meetings directories.
  • An agent automates daily note creation, downloads attached Google Docs, links meeting notes to people and projects, and aggregates context over time.
  • LLMs query and synthesize from this accumulated context to draft documents (e.g., design docs), enabling “context engineering” beyond cold prompting.
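The "file system as graph database" idea from the Key Points can be sketched in a few lines: scan a vault of Markdown files (nodes) and parse their `[[wikilinks]]` (edges) into an adjacency map. This is a minimal illustration, not the author's actual tooling; the vault layout and `build_graph` helper are hypothetical, and the regex assumes Obsidian-style links with optional `|alias` or `#heading` suffixes.

```python
import re
from pathlib import Path
from collections import defaultdict

# Capture the link target, stopping before any ]], |alias, or #heading part.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def build_graph(vault: Path) -> dict[str, set[str]]:
    """Treat files as nodes and [[wikilinks]] as directed edges."""
    graph: dict[str, set[str]] = defaultdict(set)
    for note in vault.rglob("*.md"):
        name = note.stem
        graph[name]  # register the node even if it links to nothing
        for target in WIKILINK.findall(note.read_text(encoding="utf-8")):
            graph[name].add(target.strip())
    return graph

if __name__ == "__main__":
    import tempfile
    # Tiny throwaway vault mimicking a PARA-style layout (illustrative names).
    with tempfile.TemporaryDirectory() as d:
        vault = Path(d)
        (vault / "projects").mkdir()
        (vault / "projects" / "launch.md").write_text(
            "Ship it with [[alice]] per [[2026-04-01]]", encoding="utf-8")
        (vault / "people").mkdir()
        (vault / "people" / "alice.md").write_text(
            "Works on [[launch]]", encoding="utf-8")
        graph = build_graph(vault)
        print(sorted(graph["launch"]))  # → ['2026-04-01', 'alice']
```

From here, "context engineering" is just graph traversal: before drafting a design doc, an agent could walk outward from a project node to pull in the linked people, meetings, and daily notes as prompt context.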

Hottest takes

“a context engineering system” — alxndr
“no local model comes close… even with 96GB of VRAM” — embedding-shape
“Why not a flat list of files and let the AI agent explore” — itake
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.