April 15, 2026

Let AI drive, watch comments collide

Agent - Native Mac OS X coding ide/harness

Open‑source Mac AI that runs your Mac — commenters clash over tokens, tone, and names

TLDR: Agent! is a free Mac app that lets AI operate your computer locally or with your own keys, no subscription. The thread explodes over API costs vs. subscriptions, whether leading with the founder’s illness is appropriate, security worries about “booby‑trapped” documents, and even pedantic fights over the name macOS.

Agent! is a free, open‑source Mac app that lets AI actually click around your desktop, open apps, run code, and even text you back using Apple’s new on‑device intelligence. No subscription required: bring your own key, or run it fully local. The devs boast anti‑hallucination rules, cache‑savvy prompts, and a “don’t phone home” vibe. It’s a big swing — and the comments are even bigger.

The loudest chorus? Money and tokens. One user grumbled that paying per‑token via API “burns way faster” than their monthly plan, while others nodded toward the free, on‑device Apple mode as the real draw.

Then the thread swerved into feelings vs. focus: the project leads with a note that the founder is battling cancer; some offered solidarity, but one commenter said the third‑person plea “detracts” from the work — a raw clash of empathy and presentation.

Security watchdogs arrived next, warning that if the AI reads a booby‑trapped document, it could be tricked into clicking through Safari or Messages; think “robot intern, enthusiastic but gullible,” and folks want stronger guardrails.

Meanwhile, comic relief: someone fixated on a menu bar emoji, and another launched a ten‑year‑late crusade over saying “Mac OS X” instead of “macOS.” Drama, tokens, typos — peak internet energy. Check it out on GitHub and bring popcorn.

Key Points

  • Agent! is a native macOS open-source AI agent integrating on-device Apple Intelligence and 17 LLM providers for automation and coding tasks.
  • New: Apple Intelligence (FoundationModels.Tool) performs multi-step UI automation locally with zero cloud tokens, falling back to cloud LLMs on failure.
  • Runtime app discovery via SDEFs and application directory scanning removes hardcoded bundle IDs and auto-extends targetable apps.
  • Prompt caching is implemented for OpenAI-format providers, with cached_tokens tracking and JSON sorted keys for cache hits; on-device token compression summarizes older turns beyond ~30K tokens.
  • Safety and reliability updates include anti-hallucination prompt rules with a 10-read guard; existing features like Xcode integration, AXorcist automation, privileged daemon, and local-only operation via Ollama/vLLM/LM Studio remain.
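The “JSON sorted keys” detail in the caching bullet is about determinism: provider-side prompt caching only hits when the request prefix is byte-identical across calls, so the serialized payload must not vary with dict insertion order. A minimal sketch of the idea (function and payload names are illustrative, not Agent!’s actual code):

```python
import json

def stable_serialize(payload: dict) -> str:
    """Serialize a request payload deterministically.

    Sorting keys and using fixed separators means the same logical
    payload always produces byte-identical text, so a provider's
    prefix cache can recognize the repeated prompt prefix.
    """
    return json.dumps(payload, sort_keys=True, separators=(",", ":"))

# Two dicts with the same content but different insertion order
# serialize to the same bytes, keeping the cached prefix stable:
a = {"model": "gpt-4o", "tools": [{"name": "click", "type": "function"}]}
b = {"tools": [{"name": "click", "type": "function"}], "model": "gpt-4o"}

assert stable_serialize(a) == stable_serialize(b)
```

Without this, a tool list that serializes in a different key order on each run would silently defeat prefix caching and burn the tokens commenters are complaining about.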

Hottest takes

"not going to pay with an API Key which burns through tokens way faster" — moonlighter
"It detracts from the project" — danpalmer
"The most obvious attack surface is prompt injection" — foreman_
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.