April 24, 2026
When AI writes the hot take, too
LLM research on Hacker News is drying up
“Thanks, Claude”: Hacker News melts down over AI-made research about… AI research
TLDR: An AI-assisted post showed that serious research papers, especially about AI, are appearing less often on Hacker News, even though trendy AI work dominated the front page in past years. The comments exploded into a fight over whether the site has become lazy, too hype-driven, and weirdly dependent on AI to even analyze itself.
Hacker News just had the most 2026 argument ever: a post about how AI research papers are disappearing from the front page, written with the help of an AI, triggered a full-on community identity crisis. The author used Claude, an AI assistant, to dig through site data and show that serious research papers from arXiv (a big online research library) are showing up less often, even after years when flashy large language model (LLM) work dominated the front page.
But the real story is the comments. One camp, like user simonw, basically declared that Hacker News is bad at real science talk anyway, because people would rather argue from the headline than read an abstract. Another camp, led by latexr, roasted the post for being “AI all the way down,” complaining that there was “not one bit of original research” and no explanation of how Claude actually picked which papers “held up.” It’s meta drama: people using AI to complain about people using AI.
Then there’s the vibe check from gessha: big companies keep secrets, hardware is expensive, everyone’s tired of AI hype, and only splashy “miracle” papers hit the front page while slow, boring progress gets ignored. Commenters joked that the post reads like sponsored content for Claude and that HN has become “headline takes first, reading later.” In short: less research, more ranting, and a community wondering if it’s burned out on AI—or just lazier than it wants to admit.
Key Points
- The author analyzed the share of arXiv-linked stories on Hacker News over time using the BigQuery HN dataset and monthly bucketing (a rough sketch of that kind of query follows this list).
- Results show a recent decline in arXiv story share, with a notable peak around 2019.
- Among the top 100 upvoted papers of 2019, 41% were about deep learning; for 2023–2026, 59% were about LLMs or AI.
- Examples of 2019 papers deemed to have ‘held up’ include MuZero, EfficientNet, XLNet, the PyTorch design paper, and François Chollet’s “On the Measure of Intelligence.”
- Guesses for enduring 2023–2026 works include DeepSeek-R1, Generative Agents, The Era of 1-bit LLMs (BitNet b1.58), Differential Transformer, and the LK-99 preprint cluster.
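For readers who want to poke at the numbers themselves: the BigQuery public Hacker News dataset (`bigquery-public-data.hacker_news.full`) is real and queryable, but the sketch below is not the author’s exact query. The arXiv URL match, the restriction to stories with URLs, and the monthly bucketing are assumptions about how such an analysis would typically be done.

```python
# Minimal sketch (assumed approach, not the original post's query):
# estimate the monthly share of HN stories that link to arxiv.org,
# using the public BigQuery Hacker News dataset.
# Requires: pip install google-cloud-bigquery pandas db-dtypes, plus GCP credentials.
from google.cloud import bigquery

QUERY = r"""
SELECT
  FORMAT_TIMESTAMP('%Y-%m', timestamp) AS month,
  COUNTIF(REGEXP_CONTAINS(url, r'arxiv\.org')) AS arxiv_stories,
  COUNT(*) AS all_stories,
  SAFE_DIVIDE(
    COUNTIF(REGEXP_CONTAINS(url, r'arxiv\.org')),
    COUNT(*)
  ) AS arxiv_share
FROM `bigquery-public-data.hacker_news.full`
WHERE type = 'story'
  AND url IS NOT NULL
GROUP BY month
ORDER BY month
"""

def monthly_arxiv_share():
    """Return a DataFrame with one row per month and the arXiv story share."""
    client = bigquery.Client()  # uses default Google Cloud credentials
    return client.query(QUERY).to_dataframe()

if __name__ == "__main__":
    df = monthly_arxiv_share()
    print(df.tail(12))  # most recent year of monthly shares
```

Classifying which of those stories count as “deep learning,” “LLM,” or papers that “held up” is the part the commenters fought over; a query like this only gets you the raw share, not the judgment calls.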