Anti-patterns while working with LLMs

The community claps back: stop drowning AI in noise, stop the hype

TLDR: An engineer listed five “don’t do this” rules for using AI chatbots, from not repeating context to not blindly shipping vibe-coded output. Comments erupted: some called it messy promo, others argued AI can’t reason, while veterans said it’s great at analyzing big codebases and should write code to call tools.

The post lists five “don’ts” for working with AI chatbots known as large language models (LLMs): don’t repeat info, don’t force it to do what it’s bad at, don’t stuff it with too much context, don’t expect strong results on niche topics, and don’t vibe‑code blindly. But the comments went nuclear. User sharkjacobs called it messy and suspected it’s just promo for click3. Others piled on about the “fish climbing a tree” analogy, turning it into a meme while debating whether the advice is even actionable.

On the constructive side, veterans like Scotrix boiled it down to street rules: be specific, keep requests small, and push strict tasks to real code. pedropaulovc shared war stories with Claude Code and the obscure SolidWorks toolkit: hallucinated methods and parameters made it “like pulling teeth.” Then willvarfar dropped a surprise: AI is better at understanding big projects and finding bug roots than writing new code. Meanwhile, isodev went full skeptic, saying LLMs simply aren’t capable of logic or true creativity. The biggest fight? Whether we should treat AI as a helpful analyst that writes code to call tools — or accept that the magic isn’t coming. The takeaway is pragmatic: less hype, more guardrails.

Key Points

  • Redundant context harms LLM efficiency; send only state-changing or final context, not repeated inputs.
  • Match tasks to model strengths; prefer code for accuracy (e.g., counting) and express tool use as code.
  • LLM performance degrades near token limits (e.g., 128k), risking forgotten or altered information.
  • LLMs perform worse on obscure or post–training-cutoff topics; even well-documented integrations can fail.
  • Maintain strict human oversight of generated code to avoid exposing sensitive data or structural mistakes.
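The “prefer code for accuracy” point above can be sketched as follows: rather than asking a model to count characters directly (a task LLMs often fumble), ask it to emit a call to a deterministic helper and run that instead. This is a minimal illustration, assuming a helper named `count_occurrences`; the name and prompt shape are not from the post or any specific API.

```python
def count_occurrences(text: str, ch: str) -> int:
    """Deterministic counting the model should delegate to, not imitate."""
    return text.count(ch)

# Instead of prompting: "How many r's are in 'strawberry'?" (the model may guess),
# prompt the model to produce the call: count_occurrences("strawberry", "r")
# and execute it yourself for an exact answer.
print(count_occurrences("strawberry", "r"))  # deterministic: 3
```

The same pattern generalizes to the “express tool use as code” advice: the model writes a small, checkable invocation, and ordinary code does the precise work.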

Hottest takes

"created solely as a way to promote click3" — sharkjacobs
"using LLMs for logical, creative or reasoning tasks (things the technology isn’t capable of doing) is an anti-pattern." — isodev
"I have found it is far better at understanding - and, with prodding, determining the root causes of bugs - big sprawling codebases than it is at writing anything" — willvarfar
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.