January 6, 2026
Next‑token or next‑level?
The Agentic Self: Parallels Between AI and Self-Improvement
AI now takes notes, talks to itself, and role-plays; fans cheer the gains, skeptics yell "that's not thinking"
TLDR: The article says new AI "agents" work better by taking notes, thinking in private, and role-playing jobs to plan, build, and review. Commenters split: some applaud the practical gains, others say it's just next-word prediction and that comparisons to human thinking are misleading, but the workflow still matters.
Forget chatty bots. 2025's AI "agents" are power planners: they write notes, talk to themselves, and even role-play as Architect, Engineer, and Critic to get real work done. The piece likens these tricks to self-help staples: journaling, inner monologue, alter egos. It argues that scratchpads, memory buffers, and self-talk loops turn chatty parrots into steady problem-solvers. Think Beyoncé's "Sasha Fierce" for coding, minus the sequins.
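If you're wondering what the "write it down, then reason" habit looks like outside the metaphors, here's a minimal Python sketch: a scratchpad the model keeps appending to, fed back in on every pass. The `call_llm` stub and the prompt format are illustrative stand-ins, not anything from the article or a real provider's API.

```python
# Minimal sketch of a scratchpad + reason/act loop (illustrative only).
# `call_llm` is a placeholder; swap in a real model API client of your choice.

def call_llm(prompt: str) -> str:
    """Placeholder: replace with an actual model call."""
    return "THOUGHT: break the task into steps\nACTION: draft step 1"

def run_agent(task: str, max_steps: int = 5) -> list[str]:
    scratchpad: list[str] = []          # externalized "memory buffer"
    for _ in range(max_steps):
        # The model sees the task plus everything it has written so far.
        prompt = f"Task: {task}\n\nNotes so far:\n" + "\n".join(scratchpad)
        reply = call_llm(prompt)
        scratchpad.append(reply)        # write it down (the "journaling" step)
        if "DONE" in reply:             # loop until the model declares it's finished
            break
    return scratchpad

if __name__ == "__main__":
    for note in run_agent("Refactor the billing module"):
        print(note)
```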
Top drama: DeepSeek-style "think first" reasoning got roasted as "just hidden text bubbles," with tensor dropping the mic: "half of people don't have an internal monologue." Translation: stop pretending bots think like brains. llIIllIIllIIl asked if "thinking" is just predicting the next word. Meanwhile, blibble quipped that Manuel Blum's legendary writing advice only works "if you have unlimited paper," spawning memes about AIs hoarding sticky notes and "Sasha Fierce for code."
So is agentic AI genius or productivity cosplay? Fans say these simple habits—write it down, reason in loops, switch personas—really boost reliability. Skeptics call it anthropomorphic fan‑fic. The thread turned into a split‑screen: results vs. philosophy, with one camp chanting “ship it,” and the other asking, “But is it thinking?”
Key Points
- The article argues that AI's focus shifted in 2025 from conversation to agentic action.
- It links agent improvements to human practices: writing, self-talk, and role-playing.
- To address LLM context limits, agents use scratchpads, plans, and memory buffers to externalize state.
- Hidden reasoning (internal monologue) and iterative loops (Act/Write → Reason → Repeat) are presented as improving reliability.
- Role prompting and multi-agent setups (Architect/Engineer/Critic) are said to constrain search and yield better results (a rough sketch follows below).
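As for the Architect/Engineer/Critic bit, the idea is just the same model called three times, each call constrained to one job. A hedged sketch, again with a placeholder `call_llm` and made-up role prompts rather than the article's actual setup:

```python
# Sketch of role prompting: one model wears three hats in sequence.
# `call_llm` is a placeholder for a real chat-completion call.

def call_llm(system: str, user: str) -> str:
    """Placeholder: replace with an actual model call."""
    return f"[{system.split(',')[0]}] response to: {user[:40]}..."

ROLES = {
    "Architect": "You are a software architect, produce a high-level plan only.",
    "Engineer":  "You are an engineer, implement the plan exactly as written.",
    "Critic":    "You are a code reviewer, list concrete problems and fixes.",
}

def architect_engineer_critic(task: str) -> dict[str, str]:
    plan   = call_llm(ROLES["Architect"], task)
    code   = call_llm(ROLES["Engineer"], f"Task: {task}\nPlan:\n{plan}")
    review = call_llm(ROLES["Critic"],   f"Task: {task}\nCode:\n{code}")
    return {"plan": plan, "code": code, "review": review}

if __name__ == "__main__":
    for stage, output in architect_engineer_critic("Add retry logic to the API client").items():
        print(f"--- {stage} ---\n{output}\n")
```

The point of the split, per the article's framing, is that each role constrains the search space: the Critic never has to plan and the Architect never has to write code.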