Jellyfin LLM/"AI" Development Policy

Jellyfin draws a hard line: AI can help code, but humans must talk

TLDR: Jellyfin banned chatbot-written posts and set strict rules for AI-assisted code, with an exception for translation. Commenters split among cheering the human-first stance, calling for perma-bans, and proposing a universal AI etiquette guide, making this a bellwether for how open source handles AI going forward.

Jellyfin just went full hall monitor on AI: no chatbot-written comments, issues, or pull request text—zero, nada—while tightly policing any AI-assisted code. The community reaction? A popcorn-worthy split. One heavy LLM user cheered the human-first rulebook, basically saying, “If I wanted robot prose, I’d ask a robot.” Another went full law-and-order, demanding instant, permanent bans for offenders. Meanwhile, others applauded the clear rules as a way to turn AI into a tutor, not a code blender. The policy even makes room for translations, and commenters like ChristianJacobs backed that vibe: non-native English is welcome, just be honest about using a translator.

The hottest talking point was Jellyfin’s ban on “vibe coding” (aka tossing random AI spit-outs at the codebase). Commenters turned it into a catchphrase while debating how strict the clean-up should be. Some want a universal “Agent Policy” for AI tools—think etiquette and safety manual for bots—so contributors know the house rules before they mash “generate.” Others shrugged, “Seems perfectly legit,” predicting it’ll create better, more thoughtful contributions.

TL;DR of the drama: Jellyfin wants code quality and human accountability, the crowd’s split between “finally!” and “go harder,” and “vibe coding” is the new meme in dev land.

Key Points

  • Jellyfin issued a formal policy on LLM use across its official projects and community spaces.
  • LLM-generated text is prohibited in issues, comments, PR bodies, and forums; only LLM-assisted translations are allowed with explicit disclosure.
  • LLM-assisted code must be concise, focused, and split into manageable commits; unrelated changes lead to rejection.
  • Formatting and quality standards are mandatory; excessive unhelpful comments, spaghetti code, and LLM metafiles (e.g., .claude configs) are not allowed.
  • Contributors must understand and explain changes, ensure code builds and tests pass, and address review feedback.

Hottest takes

"Do not send me something an LLM wrote, if I wanted to read LLM outputs, I would ask an LLM." — hamdingers
"Should just be an instant perma-ban (along with closure, obviously)." — lifetimerubyist
"we will need a 'PEP-8' for LLM / AI code contributions" — giancarlostoro
Made with <3 by @siedrix and @shesho from CDMX. Powered by Forge&Hive.