November 6, 2025
Bug bounty or BS bounty?
AI Slop vs. OSS Security
Open-source volunteers drown in AI bug spam while Big Tech shrugs
TLDR: Maintainers say AI-generated bug reports now swamp open-source projects, burying real issues in noise. Commenters split three ways: demand hard proof, build reputation filters, or blast Big Tech freeloading. The shared warning: volunteer time is being wasted while security gets slower and riskier.
The bug-bounty world is having a meltdown, and the comments are pure fire. A veteran triager reports that projects like curl now see roughly 20% of security submissions arriving as AI-generated "slop," while genuine findings have dropped to about 5%. Translation: for every legit bug, roughly four AI-generated fakes eat hours of volunteer time. The community's mood? Exhausted, annoyed, and ready to gatekeep.
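For the record, the "four fakes" math is just the ratio of those two figures; a quick sanity check (both numbers are approximate, and the remainder is other noise):

```python
# Back-of-the-envelope using the figures quoted above (both approximate).
ai_slop_share = 0.20   # ~20% of incoming reports are AI-generated slop
genuine_share = 0.05   # ~5% are real vulnerabilities

# AI-generated fakes per genuine finding; the remaining ~75% is other
# invalid-but-human noise, which this ratio deliberately ignores.
print(ai_slop_share / genuine_share)  # -> 4.0
```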
One camp, led by pksebben, calls out the core issue: LLMs (large language models) don't know truth; they just sound convincing. He dreams of a "truth layer" to fix hallucinations and salutes curl as the xkcd Jenga block holding the internet together. The labor angle gets spicy too: wwfn says this is classic "wealth built on underpaid work," arguing Big Tech would care more if open-source licenses forced them to invest. Meanwhile, the receipts crowd arrives: dvt wants hard proof (tests, logs, screencasts) before any report gets oxygen, arguing that a CVE (the public catalog of disclosed security flaws) shouldn't be handed out for cosplay.
There’s even a “referral ladder” pitch from Jean-Papoulos to boost veteran accounts and filter noise, which sparked gatekeeping jokes and Hunger Games memes. And goalieca sums up the vibe: AI nails the form of research, not the substance. The drama? AI confidence versus human credibility, with volunteers stuck clearing a never-ending inbox of “plausible” fiction.
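The referral-ladder idea is easy to picture in code. Below is a minimal sketch of a reputation-weighted triage queue, assuming a simple vouching-plus-accuracy score; every name, weight, and threshold here is hypothetical, not anything HackerOne or curl actually runs.

```python
# Sketch of the "referral ladder" idea: reporters vouched for by established
# accounts start with more reputation, and low-reputation reports queue
# behind trusted ones. All names and weights are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Reporter:
    handle: str
    valid_reports: int = 0      # confirmed-real findings
    invalid_reports: int = 0    # closed as bogus/slop
    vouched_by: list[str] = field(default_factory=list)  # referrals from veterans

    def reputation(self) -> float:
        # Each referral adds weight; the total is scaled by historical
        # accuracy, Laplace-smoothed so new accounts aren't auto-zero.
        base = 1.0 + 2.0 * len(self.vouched_by)
        total = self.valid_reports + self.invalid_reports
        accuracy = (self.valid_reports + 1) / (total + 2)
        return base * accuracy

def triage_order(reports: list[tuple[Reporter, str]]) -> list[tuple[Reporter, str]]:
    """Sort the inbox so high-reputation reporters get human eyes first."""
    return sorted(reports, key=lambda r: r[0].reputation(), reverse=True)

# Example: a vouched veteran's report outranks a fresh account's.
veteran = Reporter("veteran", valid_reports=12, invalid_reports=3, vouched_by=["maintainer_a"])
newcomer = Reporter("newcomer")
inbox = [(newcomer, "critical RCE in parser!!"), (veteran, "heap overflow in header parsing")]
for reporter, title in triage_order(inbox):
    print(f"{reporter.handle} ({reporter.reputation():.2f}): {title}")
```

The point of the sketch is the trade-off the commenters were joking about: any score like this gatekeeps newcomers by design, which is exactly what the Hunger Games memes were poking at.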
Key Points
- The author has a decade of experience in bug bounty work, including five years at HackerOne leading triage and technical services.
- AI-generated reports often lack true codebase understanding, pattern-matching their way to implausible vulnerabilities.
- Misaligned incentives encourage high-volume submissions, increasing noise in vulnerability reporting.
- curl maintainer Daniel Stenberg observes that ~20% of submissions are AI-generated, while genuine findings have dropped to ~5%.
- False reports demand substantial volunteer time to investigate and disprove, raising the human cost for maintainers.