Landmark Study Reveals AI’s Widespread Role in Scientific Writing


A massive new study has uncovered detectable “AI fingerprints” in millions of scientific papers, revealing that artificial intelligence has quietly become a pervasive force in academic publishing. Researchers found that at least 13.5% of biomedical research abstracts published in 2024 showed evidence of being written with some assistance from large language models (LLMs) such as ChatGPT and Google Gemini—raising fresh questions about research integrity and the future of scholarly communication (phys.org).

The unprecedented analysis, conducted by a team of U.S. and German scientists and published in “Science Advances,” combed through over 15 million PubMed-indexed scientific abstracts. By scrutinising subtle shifts in stylistic choices—especially the rise in “flowery” or non-technical words such as “showcasing,” “pivotal,” and “grappling”—the study mapped the growing influence of AI-powered text generation tools since ChatGPT’s public release in late 2022.

For Thai researchers, policymakers, and students, the findings have immediate significance. Southeast Asia, including Thailand, is rapidly advancing in science and medical innovation, investing heavily in research infrastructure and international collaboration. As the country strives to raise its global scientific profile through English-language publications, AI text generators offer both promise and peril: they can aid non-native speakers and speed up manuscript preparation, but they also challenge traditional ideas of academic authorship, accountability, and originality. The Ministry of Higher Education, Science, Research and Innovation (MHESI), as well as university administrators, will likely look to this data when considering updates to ethical guidelines and researcher training programmes.

Why does this story matter? In an age when information moves at lightning speed, the integrity of the scientific literature is more important than ever. Peer-reviewed research underpins everything from disease policy and drug regulation to climate action and technology development. If a growing portion of that literature is, knowingly or unknowingly, shaped by AI models, policymakers and practitioners must be able to determine which parts of the scholarly record represent genuine human discovery and which are partially or substantially the product of algorithms. The issue is particularly important for Thailand’s academic community, which faces global competition, language barriers, and mounting pressure to publish in top-tier journals.

The key findings of the study are striking. By comparing word usage trends before and after the arrival of ChatGPT, the team detected a surge in certain stylistic expressions rarely seen in pre-LLM work. Before 2024, nearly four in five of these “excess” words in research abstracts were nouns. In 2024, verbs accounted for two-thirds, with adjectives making up another significant share. Experts say this shift hints at the increasingly sophisticated, narrative-driven, and sometimes overly dramatic style that LLMs tend to produce—an echo of their design as tools trained on billions of internet words to mimic human conversation.

One of the study’s lead authors, a computational linguist from a major U.S. research institute, commented that their method “sidestepped the need to directly identify AI-written text through imperfect detection tools.” Instead, by charting how word patterns changed across millions of papers following ChatGPT’s release, they uncovered clear, quantifiable signals of linguistic change. In other words, rather than catching the AI “in the act,” the researchers tracked its influence the way epidemiologists track the spread of disease via symptoms and trends.

This approach, inspired by studies of “excess deaths” used to understand the impact of COVID-19, has already attracted attention from journal editors and research ethics boards worldwide (lifeboat.com), many of whom are grappling with how to update policy to reflect an era of AI-assisted scholarship. Some journals now require authors to declare any AI assistance. Others are considering more robust detection, even as studies show that experienced reviewers often struggle to identify LLM-generated abstracts with accuracy (PubMed).

For Thailand, this new landscape poses unique challenges—and opportunities. Thai academics, particularly in science and medicine, often face significant hurdles writing in academic English; LLMs are increasingly turned to for translation, editing, and even first drafts (PubMed). Universities such as Chulalongkorn and Mahidol have started pilot programmes exploring responsible AI integration into scientific writing courses, while grant agencies are considering including “AI disclosure and transparency” requirements in future funding cycles.

A senior research administrator at a major Bangkok university, speaking on condition of anonymity, remarked: “LLMs make English paper writing and submissions much easier for our early-career scientists. But we worry about over-reliance and potential ethical problems, especially plagiarism or unintentional misrepresentation.” This sentiment echoes growing international concerns around “false authorship,” where scholars may rely so heavily on AI-generated content that the traditionally human, creative aspect of research communication is diminished (PubMed).

Some experts note that AI-generated language can “mask” low-quality or hastily done research, potentially flooding the literature with science that sounds better than it actually is. According to a recent commentary in “Science” magazine, “The risk is not just that AI will write our papers, but that the collective voice of science will begin to sound the same—flattened, generic, uninspired.” For readers, this could make it harder to spot errors, misconduct, or even fraudulent results (mescomputing.com).

Nevertheless, other Thai educators and researchers see AI text generators as valuable tools, particularly for students from rural areas or under-resourced institutions. “Properly used—with oversight, training, and clear rules—these technologies can help level the playing field,” a senior lecturer at a leading northern Thai university told the Bangkok Post. “But we urgently need clearer guidelines from both the university and national authorities.”

Amid these debates, Thailand’s academic community is wrestling with how best to harness the potential of AI while safeguarding the unique intellectual contribution of its scientists. Already, the Council of University Presidents of Thailand (CUPT) has convened workshops on ethical AI use in research and announced plans for a national code of conduct by 2026. Some journals published in Thailand’s TCI (Thai-Journal Citation Index) database have started piloting new AI-disclosure sections in their submission portals, seeking to align with international best practices (TCI).

In cultural context, adapting to new technology is nothing new in Thailand. From integrating social media in everyday life to the rapid uptake of smartphone banking and e-commerce, Thai society has often balanced innovation and tradition. In academia, too, there is a long history of absorbing international trends while maintaining uniquely Thai perspectives and values—most notably, the importance of collective contribution, seniority-based mentorship, and social responsibility. The AI-authorship debate is taking place within this broader balancing act.

Experts agree that the future trajectory of AI in scientific authorship will depend on three factors: international publisher policies, the sophistication of detection and disclosure tools, and national-level guidance. Tools that compare “excess phrase” usage, as demonstrated in the current study, may soon be rolled out to journals, allowing editors to monitor the linguistic “signature” of incoming manuscripts. Detection, disclosure, and dialogue will all play important roles in building trust.

For Thai readers—whether researchers, practitioners, or students—the key practical recommendations emerging from this new research are:

  • Be transparent: When using LLMs for writing, translation, or editing, always disclose their role in your manuscript submissions.
  • Stay updated: Follow official guidelines from your institution and organisations such as CUPT, MHESI, and TCI regarding AI usage and disclosure.
  • Use AI responsibly: Rely on LLMs for technical assistance, but do not outsource critical thinking, original ideas, or ethical judgment.
  • Build skills: Continue developing strong English and scientific writing abilities alongside technological skills.
  • Foster discussion: Engage with colleagues and peers about the ethical and practical implications of AI in scholarly work.

Looking ahead, Thailand’s capacity to participate confidently and ethically in the global scientific community will depend not just on innovative research—but on an ongoing conversation about how best to integrate the tools of tomorrow. AI text generators are here to stay, but it’s up to individual scholars, institutional leaders, and policymakers to ensure that their fingerprints add value, not confusion, to the ever-growing corpus of science.

