The Problem with LLMs

Author says AI is theft; commenters answer with hypocrisy charges, boredom, and Buddha jokes

TLDR: A nonprofit dev blasted AI tools as “plagiarism machines,” grounding the critique in Buddhist ethics. Comments clapped back: most don’t care, some mocked the moral stance, and others called out hypocrisy and real job pain for translators. It matters because the AI debate is shifting from tech to culture and livelihoods.

A nonprofit developer dropped a moral bomb, calling Large Language Models (LLMs—programs that remix what they’ve learned from tons of online data) “plagiarism machines” and tying their use to Buddhist rules against stealing and lying. The internet’s response? A loud, collective shrug peppered with spicy side‑eye and memes.

The strongest vibe: “We already don’t care.” One commenter flatly said the world has moved on from the plagiarism panic, while another stopped reading at the first mention of it—community fatigue is real. Then came the zingers: “Buddha would not approve,” snarked one user, turning ethics into a punchline. Others called out what they saw as hypocrisy: the author reportedly experiments with LLMs and considers them fine for translation, which led to a clapback that translators are starving while AI eats their lunch.

There’s meta‑drama too: calls to stop upvoting low‑effort “AI good/AI bad” takes unless they offer fresh thinking. The split is clear: one camp rails against theft and dishonesty, the other says everyone’s already using AI and moral purism is just vibes. Amid the chaos, the comment section became the main show—ethics versus convenience, sīla versus silicon, and a whole lot of “lol who cares” energy.

Key Points

  • A blog post by Vijay Khanna prompted the author to consider using LLMs to accelerate development of the nonprofit Pariyatti mobile app.
  • Pariyatti’s ethical code (sīla) informs the evaluation, with emphasis on not stealing and not lying.
  • The article argues LLMs inherently involve plagiarism due to training on copyrighted materials and lack of attribution.
  • Early GitHub Copilot reportedly reproduced training data verbatim before patches addressed the issue.
  • The author claims open-source licensing is often incompatible with LLM training and equates consuming LLM output with using pirated media.

Hottest takes

“Virtually nobody cares about this already... today.” — bayarearefugee
“Give it up. Buddha would not approve.” — bronlund
“LLMs are evil! Except when they're useful for me” — bambax