January 2, 2026
Prompt fights and diff delights
Vibe Coding Killed Cursor
Dev world splits: vibe prompts or hands-on control? The comments are chaos
TLDR: The author says chat-first “vibe coding” wastes money and ruins control, pushing AI Studio or OpenCode for clearer, human-steered changes. Comments explode: one camp calls the AI Studio pick absurd, another cheers, and power users flex terminal workflows—everyone arguing over who’s driving the coding bus.
Anton Morgunov says “vibe coding” — asking an AI to build whole apps from chat — is burning cash and patience, and it’s killing Cursor. He urges devs to keep control: chat with Google’s AI Studio or use OpenCode to see clean git-style diffs. Cue a comment brawl: tcdent calls the AI Studio recommendation “baffling,” while noo_u blasts in with a full-throated “100% agree.” The mood? Split like a broken pull request.
The core drama is simple: Do you let the bot drive, or keep your hands on the wheel? submeta goes old-school cool: “Disagree. I use Claude Code and Codex daily,” flexing a terminal setup with tmux and neovim that screams power user. Meanwhile, fans hype Gemini’s giant memory (“context window”), with manishsharan bragging they toss entire codebases in and it “unravels the hairball.” Others, like boredtofears, aren’t buying the sweeping model claims, warning that one-off anecdotes don’t equal truth.
Memes flew about “token taxes” and the author’s spooky “666 days on Cursor” flex — internet promptly declared vibe coding cursed. The takeaway: either diff-your-life or doom-scroll prompts. The community is loudly choosing teams, and both sides think the other is wasting time and money.
Key Points
- The author argues that agentic “vibe coding” is token-inefficient and undermines developer control, leading him to stop using Cursor.
- He recommends human-in-the-loop development using Google’s AI Studio with Gemini 2.5/3 Pro or OpenCode (similar to Claude Code) that displays git diffs.
- For model selection, he suggests Anthropic’s Sonnet 4.5 for most tasks and Opus 4.5 for complex tasks, accessible via Claude Pro with higher tiers for more Opus usage.
- He illustrates inefficiency with a landing page example where iterative prompts cause large context reprocessing (e.g., 2.2k input tokens) and sizable outputs for minor edits.
- The post situates these recommendations within broader LLM evolution since ChatGPT, noting observed capabilities in GPT‑4 and Gemini 2.5 Pro.
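The token-inefficiency point is easy to see with simple arithmetic. A minimal sketch, using hypothetical numbers (the function names and the simplified cost model are ours, not the author’s): in a chat loop, every follow-up prompt re-sends the whole growing conversation, so total input tokens grow roughly quadratically with the number of edits, while a diff-reviewing workflow pays for the context roughly once.

```python
def chat_loop_input_tokens(context: int, edit_prompt: int, turns: int) -> int:
    """Total input tokens when each turn re-sends all prior context.

    Simplified model: the transcript grows by one edit prompt per turn,
    and the full transcript is billed as input every turn.
    """
    total = 0
    transcript = context  # tokens the model must re-read each turn
    for _ in range(turns):
        transcript += edit_prompt
        total += transcript
    return total


def diff_loop_input_tokens(context: int, edit_prompt: int, turns: int) -> int:
    """Total input tokens when the context is loaded once and each turn
    sends only the short edit prompt (an idealized diff-based workflow)."""
    return context + edit_prompt * turns


# Hypothetical example echoing the article's 2.2k-token landing page:
# five minor edits of ~100 prompt tokens each.
chat_cost = chat_loop_input_tokens(2200, 100, 5)  # 12,500 input tokens
diff_cost = diff_loop_input_tokens(2200, 100, 5)  # 2,700 input tokens
```

Real tools complicate this (prompt caching, partial context, tool calls), but the shape of the curve is the article’s point: re-processing the whole context for every minor tweak is where the “token tax” comes from.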