Awful AI is a curated list tracking current scary uses of AI

Internet piles on “Awful AI” list — artists, privacy hawks, and tech bros clash

TLDR: A public “Awful AI” list highlights biased, creepy, and dangerous uses of AI and invites citation via Zenodo. Commenters erupt: artists demand focus on AI exploiting their work, privacy advocates cheer the receipts, and tech optimists call it fearmongering — all agreeing we need better guardrails now.

The new "Awful AI" list is basically a horror anthology for robot mischief, and the comments are pure chaos. The list catalogs creepy use cases like biased face-tagging, fake-news engines, and surveillance tech — complete with an "Annual Awful AI Award" — and can be cited via Zenodo for anyone spreading the word. But the real fireworks are in the replies.

Artists stormed in first, saying the list downplays how AI chews up their work and livelihoods. One user begged for more focus on art theft and scammy platforms, while another wanted AI pointed at space and medicine instead of our faces. Privacy folks waved receipts: the infamous “Twitter autocrop prefers boobs” meme, the chatbot that turned toxic within a day, and skin-tone failures that mislabel darker-skinned people — all held up as Exhibit A. Tech optimists pushed back, calling the list "doom bait" and insisting bias can be fixed, which sparked a spicy thread about whether awareness is responsible or just fearmongering.

Humor landed hard too: commenters dunked on “AI Gaydar” as the worst party trick ever, and joked that the "Depixelizer" turning Obama white should win the Awful Award twice. Between laughs, the crowd agreed on one thing: we need guardrails — and probably fewer robots judging our faces.

Key Points

  • Awful AI is a curated list documenting harmful and concerning uses of AI, intended to raise awareness and spur preventive solutions.
  • The project categorizes AI misuses into areas such as discrimination, disinformation, surveillance, data crimes, social credit systems, scams, climate impacts, and military uses.
  • Examples under discrimination include biased outcomes from Google’s dermatology app, Microsoft’s Tay chatbot, Google’s and Amazon’s image recognition systems, and Zoom’s face recognition.
  • Additional cases include a depixelizing algorithm that altered Barack Obama’s image and Twitter’s biased image autocrop feature; the list also flags biases in ChatGPT and LLMs.
  • The initiative references external reporting and research, can be cited via Zenodo, and includes sections on contestational AI efforts and an annual award.

Hottest takes

"Some of the most awful AI usages are in the artistic realm" — nephihaha
"I would rather AI was put to more use in astronomy or healing medicine" — nephihaha