November 24, 2025
Prompt wars, plugin snafus
The Bitter Lesson of LLM Extensions
Bots grew up, but the comments say they still lie
TLDR: LLM extensions evolved from clunky plugins to memory and code rules, with MCP trying to standardize tool use. Commenters bicker over hallucinations, over whether Skills beat MCP, and over whether raw computing power—not human guidance—is the real path forward, making customization both exciting and chaotic.
Remember when “using an AI” meant copy‑pasting a novel into a chat box and praying? The article charts that glow‑up—from wild Plugins promising universal tool use, to simple Custom Instructions, curated Custom GPTs, sneaky Memory, repo‑level Cursor Rules, and heavyweight MCP (the Model Context Protocol, a standardized tool interface). But the crowd isn’t clapping in unison. One commenter snorts that even the latest models “still hallucinate,” joking the bot now invents fake APIs and denies real ones—peak gaslighting energy. Another fires a finance meme at MCP: “If I could short MCP, I would,” while a defender says it’s ugly but necessary, like duct tape for the internet’s tools. The Skills vs. MCP subplot gets spicy: fans claim Skills finally deliver the old plugin dream, yet wonder where the hype went—blaming “MCP inertia.” Meanwhile a philosopher shows up to say the true “bitter lesson” is that more compute beats human tinkering, prompting eye rolls and think‑pieces. And the dev crowd cackles over Cursor Rules—“tabs not spaces” becoming law, with the AI deciding when to obey. TL;DR: extensions grew up, but the vibe is split—some see progress, others see chaos, and everyone brought memes.
Key Points
- ChatGPT Plugins (Mar 2023) enabled LLM tool use via OpenAPI-described REST APIs but suffered from model limitations and a clunky UX.
- Custom Instructions (Jul 2023) provided persistent, user-defined prompts to reduce repetitive context setting.
- Custom GPTs (Nov 2023) bundled instructions, files, and actions into shareable, single-purpose configurations.
- ChatGPT Memory (Feb 2024) introduced automatic, long-term personalization by recording and reusing conversational details.
- Cursor’s repo-based rules (Apr 2024) moved customization into code, added scoped rules, and later let the LLM decide when to apply them; by late 2024, Anthropic’s Model Context Protocol (MCP, Nov 2024) was cited as the standard interface as models handled tools more reliably.
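To make the repo-rules idea concrete, here is a minimal sketch of a Cursor rule file; the path `.cursor/rules/python-style.mdc` and the rule text are illustrative assumptions, not taken from the article, though the frontmatter fields (`description`, `globs`, `alwaysApply`) follow Cursor's documented rule format:

```markdown
---
description: Python formatting conventions for this repo
globs: ["src/**/*.py"]
alwaysApply: false
---

- Use tabs, not spaces, for indentation.
- Prefer explicit type hints on public functions.
```

The `globs` field scopes the rule to matching files, while `alwaysApply: false` leaves it to the model to judge when the rule is relevant—exactly the "LLM decides when to obey" behavior the commenters joke about.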