December 6, 2025
Empathy, em-dashes, and AI whiplash
Using LLMs at Oxide
Oxide’s AI rulebook drops; commenters debate trust, ethics, and em-dashes
TL;DR: Oxide published a cautious, values-first guide for using AI at work, keeping humans accountable and warning about data privacy. Comments split between praise for the nuance, calls to keep AI away from production, and ethical worries about models “trained on stolen data,” plus jokes about em‑dashes and “intellectual fly open.”
Oxide just dropped a values-first guide to using AI chatbots at work, and the comments section lit up like a Friday deploy. The company’s stance: AI is a tool, humans own the output; use it to read and summarize big docs, but watch privacy settings and don’t hide behind “the bot did it.” Speed is great, but not at the cost of responsibility, rigor, empathy, and team trust.
Fans called it a rare, grown‑up take. One reader cheered the line that using obvious AI fluff is “like walking around with your intellectual fly open,” which instantly became the thread’s meme. Another asked for guidance aimed at junior engineers, though Oxide’s team skews senior.
But the skeptics brought heat: if the tool needs that much hand‑holding, “don’t use it near production,” snapped one critic. Others wanted Oxide to weigh the public blowback over models “trained on stolen data,” warning that ethics—and brand risk—can’t be an afterthought.
And then there’s the comedy beat: multiple folks clocked the post’s love of em‑dashes even as it cautioned about writing quality. Another adored the empathy section, a rare vibe in AI policy. Verdict from the crowd? Thoughtful policy, spicy caveats, and a side of punctuation drama.
Key Points
- Oxide’s LLM usage is grounded in company values: responsibility, rigor, empathy, teamwork, and measured urgency.
- Humans retain responsibility for any artifacts produced with LLM assistance; human judgment must remain in the loop.
- LLMs can enhance rigor by exposing reasoning gaps but can degrade quality if used carelessly.
- LLMs excel at reading comprehension and summarization of large documents, with relatively low downside.
- Uploading documents to hosted LLMs (e.g., ChatGPT, Claude, Gemini) requires ensuring data privacy and disabling training on uploaded content, often via opt-out settings.