May 14, 2026

Caught editing with AI red-handed?

EditLens: Quantifying the extent of AI editing in text (2025)

New AI detector claims it can spot when a robot polished your homework, and commenters are already declaring war

TLDR: EditLens claims it can measure how much AI changed a person’s writing, not just whether AI was involved. Commenters weren’t fully sold: the big debate is whether AI detectors are useful at all, or just another endless arms race—with a startup already cashing in.

A new research project called EditLens says it can do something that sounds straight out of a school plagiarism panic: figure out not just whether artificial intelligence wrote something, but how much of a human draft got “robot-tidied” along the way. The researchers say their system can tell apart fully human writing, fully AI writing, and the in-between stuff where a person writes first and a chatbot cleans it up. They even tested it on Grammarly-style edits, which instantly raises the stakes for students, office workers, and basically anyone who has ever hit “improve my writing.”

But in the comments, the real show was the skeptic squad. One of the strongest reactions came from users arguing that "AI detecting AI" is doomed to become an endless cat-and-mouse mess. The hottest take: this kind of system will always be chasing a moving target, because people and tools will just learn how to dodge it. In other words, commenters were giving off big "nice try, but good luck" energy. There was also a side-eye-worthy twist when another commenter noted that the research has already been turned into a business by Pangram, which sells AI detection through an application programming interface, or API (a tool other apps can plug into). That dropped a little commercial-drama flavor into the thread: is this science, a product pitch, or both? The jokes practically wrote themselves: the grammar checker may now need a grammar checker checker.

Key Points

  • The paper studies AI-edited text, arguing that many large language model interactions involve editing human text rather than fully generating new text.
  • It proposes lightweight similarity metrics to quantify the magnitude of AI editing when the original human-written text is available (a rough illustration follows this list).
  • These similarity metrics were validated with human annotators and then used as intermediate supervision for model training.
  • The EditLens regression model reportedly achieved state-of-the-art results, including 94.7% F1 on binary classification and 90.4% F1 on ternary classification.
  • The authors say the method has implications for authorship attribution, education, and policy, and they plan to publicly release the models and dataset.
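The paper's actual similarity metrics aren't spelled out here, but to make the second bullet concrete, here is a minimal, hypothetical sketch of what "quantifying the magnitude of AI editing" could look like when you have both the human draft and the edited version: compare the two strings and turn their similarity into a rough 0-to-1 "how much changed" score. This uses only Python's standard library and is an illustration of the general idea, not the authors' method.

```python
# Illustrative sketch only -- not the metrics from the EditLens paper.
# Given the original human draft and the (possibly AI-edited) version,
# estimate how much the text changed on a 0-1 scale.
from difflib import SequenceMatcher


def edit_magnitude(original: str, edited: str) -> float:
    """Return a rough 0-1 score of how heavily the text was edited.

    0.0 means the edited text is identical to the original;
    values near 1.0 mean little of the original survives.
    """
    similarity = SequenceMatcher(None, original, edited).ratio()
    return 1.0 - similarity


if __name__ == "__main__":
    human_draft = "Me and the team was happy with results of experiment."
    ai_polished = "The team and I were pleased with the results of the experiment."
    print(f"Estimated edit magnitude: {edit_magnitude(human_draft, ai_polished):.2f}")
```

In the setup the paper describes, scores of this general kind would then serve as intermediate supervision for training the regression model; the snippet above is only a toy stand-in for that first step.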

Hottest takes

"always be unreliable" — andyfilms1
"a losing battle against a moving target" — andyfilms1
"commercialized by a company called Pangram" — deminature