Artificial intelligence is quietly slipping into the world of scientific publishing. A growing number of academic papers show clear signs of having been written, or at least polished, with tools like ChatGPT. Phrases such as “According to my last knowledge update,” “regenerate response,” and “as an AI language model” have been spotted in publications that often do not disclose the use of such technology.
Concerned by this trend, researcher Alex Glynn from the University of Louisville (USA) decided to take action. In a study published last year, he revealed one of the first cases of undisclosed AI usage in a scientific paper. “The text said: ‘I am an AI language model.’ It couldn’t have been more obvious,” he told Nature. “And somehow, no one noticed — not the authors, not the editors, not the reviewers.”
Glynn went on to create Academ-AI, an online tracker that lists publications with signs of AI-generated content. Phrases like “Certainly, here are” — typical of chatbot responses — are among the red flags. So far, the tool has identified over 700 suspect papers, with 13% appearing in journals published by major publishers such as Elsevier, Springer Nature, and MDPI.
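To give a sense of how such telltale boilerplate can be screened for automatically, here is a minimal sketch in Python. It is not Academ-AI’s actual methodology; the phrase list and function name are illustrative assumptions based only on the examples quoted in this article.

```python
# Minimal sketch: flag chatbot boilerplate left behind in paper text.
# NOTE: illustrative assumption, not the Academ-AI pipeline; the phrase
# list and function name are hypothetical examples.

RED_FLAG_PHRASES = [
    "as an ai language model",
    "according to my last knowledge update",
    "regenerate response",
    "certainly, here are",
    "i am an ai language model",
]

def find_red_flags(text: str) -> list[str]:
    """Return the red-flag phrases found in the given text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in RED_FLAG_PHRASES if phrase in lowered]

if __name__ == "__main__":
    sample = ("Certainly, here are the main findings of our study. "
              "As an AI language model, I cannot access patient records.")
    hits = find_red_flags(sample)
    if hits:
        print("Possible undisclosed AI use; matched phrases:", hits)
```

In practice, a simple substring match like this only catches the most blatant cases; it says nothing about text that was AI-generated and then cleaned up.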
He’s not alone in this effort. Professor Artur Strzelecki of the University of Economics in Katowice (Poland) has also compiled a list of 64 papers from respected journals that failed to disclose AI use. “These are places where we’d expect solid editorial work and thorough peer review,” Strzelecki told Nature.
Publishers have taken mixed stances on the issue. Some require authors to declare AI usage, while others don’t. Springer Nature, for example, allows AI-assisted editing — such as grammar corrections and style improvements — without requiring disclosure.
The truth is that science, long safeguarded by strict editorial and ethical standards, now faces a new challenge: distinguishing what was written by humans from what was generated by machines. And clearly, that is not always easy.