

AI Tool Detects LLM-Generated Text in Research Papers and Peer Reviews

(Friday September 19, 2025 @04:20PM (msmash) from the it's-spreading dept.)


An academic publisher's analysis of tens of thousands of research-paper submissions has found a dramatic increase in AI-generated text over the past few years. Nature reports:

> The American Association for Cancer Research (AACR) found that 23% of abstracts in manuscripts and 5% of peer-review reports submitted to its journals in 2024 contained text that was [1]probably generated by large language models (LLMs). The publisher also found that fewer than 25% of authors disclosed their use of AI to prepare manuscripts, despite the publisher mandating disclosure at submission.

>

> To screen manuscripts for signs of AI use, the AACR used an AI tool that was developed by Pangram Labs, based in New York City. When applied to 46,500 abstracts, 46,021 methods sections and 29,544 peer-review comments submitted to 10 AACR journals between 2021 and 2024, the tool flagged a rise in suspected AI-generated text in submissions and review reports since the public release of OpenAI's chatbot, ChatGPT, in November 2022.
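
For a sense of what this kind of screening looks like mechanically, here is a minimal sketch in Python. The detect_ai_probability function and the 0.5 threshold are hypothetical stand-ins, not Pangram's actual interface; the point is just scoring each submission and tallying flag rates by year.

from collections import defaultdict

# Hypothetical detector interface -- a placeholder, not Pangram's real API.
# It would return the probability that a piece of text is AI-generated.
def detect_ai_probability(text):
    raise NotImplementedError("stand-in for a real LLM-text detector")

def flag_rates_by_year(submissions, threshold=0.5):
    """submissions: iterable of (year, text) pairs.
    Returns {year: fraction of texts flagged as probably AI-generated}."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for year, text in submissions:
        total[year] += 1
        if detect_ai_probability(text) >= threshold:
            flagged[year] += 1
    return {year: flagged[year] / total[year] for year in sorted(total)}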



[1] https://www.nature.com/articles/d41586-025-02936-6



Fart logic: (Score:2)

by A10Mechanic ( 1056868 )

You smelt it, you dealt it.

Not terribly surprising (Score:2)

by godrik ( 1287354 )

Of course, this being /. I have read TFS but not TFA.

What is it they are detecting, really? That some text was generated by an LLM is not terribly surprising: lots of language tools, Grammarly among them, are powered by LLMs.

That the abstract was partially written by an LLM is not really a problem either. Abstracts are summaries, and LLMs are somewhat decent at that. As long as you proofread for accuracy, it's probably fine.

Now, if the paper and the results are LLM-generated, then yeah, it's an issue.

Re: (Score:3)

by AmiMoJo ( 196126 )

I've heard complaints from autistic people that they have been accused of being AI due to the way they write. I bet the false positives are pretty bad with this one.

Re: (Score:1)

by innocent_white_lamb ( 151825 )

I write stories about hardboiled detectives, usually interacting with fairy tale and fantasy characters.

My writing style is deliberately contrived, with over-the-top metaphors and lots of archaic slang.

I wonder if this tool would tell me that I'm an AI based on that?

You're Totally Right (Score:2)

by lordDallan ( 685707 )

This paragraph isn't generated by an LLM. My mistake, ha! I'll make sure to do better next time

What are the odds this AI hallucinates and reports finding AI-written passages that aren't there? Feels like a very slippery slope, and it doesn't address the real problem: too many bogus papers being submitted and not enough staff to review them.

Re: (Score:3)

by allo ( 1728082 )

All AI detectors are known for large false-positive rates. Don't rely on them; you'll probably do harm to people who didn't use AI.
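
Some rough arithmetic shows why that matters at this scale. Assuming, purely for illustration, a 1% false-positive rate applied to the 46,500 abstracts the AACR screened, hundreds of authors would be flagged even if nobody had used an LLM:

# Back-of-the-envelope: honest authors flagged at an assumed false-positive rate.
# The 1% figure is an illustration, not a measured property of any detector.
abstracts_screened = 46_500        # number of abstracts in the AACR screening
false_positive_rate = 0.01         # assumed for illustration

expected_false_flags = abstracts_screened * false_positive_rate
print(f"Wrongly flagged abstracts expected: {expected_false_flags:.0f}")   # ~465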

Begun, (Score:1)

by bonedonut ( 4687707 )

The AI wars have

Who's gonna guard the guard? (Score:3)

by devslash0 ( 4203435 )

If AI algorithms are fundamentally unreliable, are they allowed to grade their own homework?

Accuracy? Relevance? (Score:2)

by Roger W Moore ( 538166 )

How accurate is this tool for modern text, though? It claims to be 99.85% accurate on text generated before 2021, but styles and use of language change over time, especially in the sciences. As the article itself notes, the false-positive rate may increase over time as our use of language diverges from what the tool was trained on. It also cannot differentiate between passages written by AI and passages written by humans and then edited by AI, and the latter is exactly how AI should be used.

Then there is the qu
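
Even taking the 99.85% figure at face value, the chance that a flagged abstract really was AI-written depends heavily on how common AI use actually is. A quick Bayes sketch, assuming (again, purely for illustration) that the figure corresponds to a 0.15% false-positive rate and a 95% detection rate:

# Positive predictive value: P(text really is AI-written | detector flags it).
# The detection and false-positive rates below are illustrative assumptions only.
def ppv(prevalence, detection_rate=0.95, false_positive_rate=0.0015):
    true_pos = prevalence * detection_rate
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

for p in (0.01, 0.05, 0.23):   # 23% is the abstract flag rate AACR reported for 2024
    print(f"prevalence {p:.0%}: P(AI | flagged) = {ppv(p):.1%}")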

Adversarial Networks (Score:2)

by darkain ( 749283 )

We already have adversarial network training methods. Each time I see a "tool that detects AI", I can only imagine the AI tool makers will start using these detectors for adversarial training on their models and outputs, to bypass whatever checks the detectors run.

Then again, we're getting closer and closer to XKCD's reality: [1]https://xkcd.com/810/ [xkcd.com]

[1] https://xkcd.com/810/
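
A sketch of the loop being described: keep rewording generated text until a detector stops flagging it. Both functions here are hypothetical stand-ins for an LLM rewriting pass and a third-party detector, not real APIs.

# Adversarial evasion loop (sketch): reword text until the detector's
# AI-probability score drops below the flagging threshold.
def paraphrase(text):
    raise NotImplementedError("stand-in for an LLM rewriting pass")

def detector_score(text):
    raise NotImplementedError("stand-in for a third-party AI-text detector")

def evade_detector(text, threshold=0.5, max_rounds=10):
    for _ in range(max_rounds):
        if detector_score(text) < threshold:
            return text
        text = paraphrase(text)
    return text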

Using an LLM to create text... (Score:2)

by MpVpRb ( 1423381 )

...is fine, if you honestly disclose how it was created

Using an LLM to create text and claiming that you wrote it is fraud

That's pretty dumb (Score:2)

by 50000BTU_barbecue ( 588132 )

All my text, including this, is generated by an LLM called my brain.

I doubt that (Score:2)

by nospam007 ( 722110 ) *

Many people use AI to 'beautify' their own texts, or correct them stylistically or make them sound more scientific, legal or whatever, especially since LOTS of them aren't native English speakers.

That's not 'generating'.
