March 11, 2026
Clickers vs coders cage match
AI won’t stop saying “Open Terminal” — clickers vs keyboard warriors
TLDR: AI keeps pushing Terminal commands because it understands text better than buttons, but that advice can be risky and sometimes just wrong. The comments split into two camps: automation lovers who want speed and consistency, and safety-first clickers who want guardrails and fewer copy-paste disasters.
Why does every AI helper sound like a drill sergeant yelling “Open Terminal”? The community says it’s simple: LLMs (large language models) speak text, not buttons. As one deadpan reply put it: “Because it’s text.” Meanwhile, the article dishes a cautionary tale: AI told a user to run long, nerdy log commands and even mixed up macOS’s default shell (it’s zsh, not bash), while bragging about “Apple’s internal security logs” — which don’t exist. Yikes.
That lit a bonfire in the comments. The automation crowd insists Terminal is faster and universal — clicking through Disk Utility is “not it,” says one. Another adds AIs were trained on text, not screenshots, so of course they’re better at telling you what to type than where to click. But the safety squad fires back: UIs have guardrails and “am I about to nuke my Mac?” feedback. Copy‑pasting mystery commands feels like juggling chainsaws.
The drama peaks with real-world receipts: one user swears ChatGPT patiently walked them through a tricky Linux Mint install, mostly by Terminal. Another warns AI can be confidently wrong — and dangerously so. Verdict? Until AIs can actually “see” our screens, Terminal is their love language, and users are stuck in a meme war of clicks vs commands.
Key Points
- AI assistants frequently recommend macOS Terminal commands, while humans often direct users to GUI tools like Disk Utility.
- LLMs are text-based, making it easier for AI to produce command-line steps than to describe GUI workflows.
- Command-line guidance has risks: limited user understanding, fewer safeguards, overwhelming output, and the potential to paste malicious commands.
- ChatGPT's example on malware detection included inaccuracies: it referenced non-existent "internal security logs," used bash despite macOS defaulting to zsh, and wrote overly broad predicates.
- Empirical checks show macOS retains unified logs for only a short window, so advice to query the last 30 or 365 days just produces excessive, less useful output.
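For the curious, here is a sketch of what a narrower version of that log query might look like. The `log show` flags (`--last`, `--predicate`, `--style`) are real macOS options, but the predicate itself is our own illustration, since Apple documents no "internal security logs" to search for:

```shell
# Hedged sketch of a scoped macOS unified-log query, not a malware detector.
# The predicate below is an illustrative example, not an Apple-documented check.
# Limiting to the last hour keeps output readable; unified logs are typically
# purged long before 30 days, so `--last 30d` mostly adds noise.
CMD='log show --last 1h --predicate '\''eventMessage CONTAINS[c] "malware"'\'' --style syslog'
echo "$CMD"

# Only execute on macOS, where the `log` tool actually exists.
if [ "$(uname)" = "Darwin" ]; then
  eval "$CMD"
fi
```

Note the scoping choices: a short time window and a specific predicate are exactly the guardrails the "safety squad" in the comments wanted from AI-suggested commands.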