January 1, 2026

Emoji-gate: AI vs Code smackdown

Building an internal agent: Code-driven vs. LLM-driven workflows

Slack emoji chaos sparks a code vs AI brawl

TLDR: Imprint moved from AI-run workflows to engineer-written scripts after the bot mistakenly tagged Slack posts as “merged.” Commenters split: some say start with code for reliability, others want AI to write the code, while practical voices brag that “janky regex” fixes the problem fast.

Imprint tried to let an AI (an LLM, or large language model) run Slack clean‑up duty—auto-tagging pull requests with a “merged” emoji—and it nailed the demo but flubbed reality, slapping badges on unmerged work. Cue a pivot: code‑driven scripts for deterministic results, with AI as a helper, not the boss.

The comments turned into a soap opera. David scolded: “Why always start with an LLM?” while jaynate basically asked why this was even a debate—if a process needs certainty, just write code. dmarwicke dropped the hacker mic: “wrote some janky regex instead, works fine.” Meanwhile, Edmond brought the chaos-bridge: let AI write the workflow code, linking a demo video. Mayop100 chimed in with Tasklet.ai, where the agent builds the automations itself.

Fans of reliability rallied behind scripts; AI optimists pitched “let AI write the code, then you get determinism.” The funniest subplot? The word “reacji.” Some readers rolled their eyes, spawning jokes about the Emoji Police and “reacji-gate.” Memes bloomed: “Regex > LLM” tees and “Claude Code in one shot” stickers. The vibe: AI is great for brainstorming, but code is great for not breaking things—especially when a wrong emoji can stall a real review. Both sides agree: save AI for judgment, lock scripts in for accuracy.

Key Points

  • Imprint implemented both LLM-driven and code-driven workflow modes to balance flexibility with determinism.
  • An LLM-based Slack/GitHub workflow sometimes added merge reactions to unmerged PRs, demonstrating reliability issues.
  • Their LLM workflows use a handler that selects configuration, loads tools, manages virtual files, orchestrates tool calls, and enforces termination conditions.
  • Configuration now supports a script coordinator, allowing custom Python to deterministically drive tool usage.
  • Code-driven scripts have the same permissions and data access as LLM flows and can optionally invoke an LLM via a subagent tool.

Hottest takes

“Why always start with an LLM?” — David
“You get the benefit of AI CodeGen along with the determinism of conventional logic” — Edmond
“wrote some janky regex instead, works fine” — dmarwicke
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.