A recent comprehensive study has revealed a dramatic uptick in the use of ChatGPT and similar large language models (LLMs) in drafting scientific papers, especially in the field of computer science—a trend that is rapidly reshaping how academic research is communicated worldwide. The findings, published in Nature Human Behaviour on August 5, 2025, offer the clearest evidence yet that generative artificial intelligence has begun to play a pivotal role in scientific writing, prompting both excitement and concern across the global research community (Phys.org).
The news is significant because it highlights both the accelerating adoption of LLMs and the unresolved questions that accompany it. For Thai academics, students, and policy-makers, understanding these trends is crucial: not only do they reflect global changes certain to reach Thailand’s own research landscape, but they also raise difficult ethical, linguistic, and quality-related questions for the kingdom’s growing scientific community.
The study, conducted by a group of international researchers, analyzed a dataset of 1,121,912 scientific papers and preprints sourced from popular online repositories, including arXiv and bioRxiv, as well as journals in the Nature publishing portfolio. Rather than trying to flag individual papers, the researchers tracked population-level shifts in word-frequency patterns to estimate the share of texts modified by LLMs between January 2020 and September 2024. Their results indicate that AI language assistance is now affecting core sections of scientific papers, primarily abstracts and introductions, far more than previously believed.
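In broad strokes, an estimate of this kind can be framed as a mixture problem: the observed corpus is modeled as a blend of human-written and LLM-modified text, and the mixture weight is inferred from how often characteristic "marker" words appear. The Python sketch below is a deliberately simplified, hypothetical illustration of that idea, not the authors' actual code: it uses a single invented marker word with made-up occurrence rates, whereas the real study jointly models the frequencies of many words.

```python
# A minimal, hypothetical sketch of population-level mixture estimation,
# not the study's actual method. Given the rate at which a "marker" word
# appears in known human-written text (p_human) and in known LLM output
# (p_llm), estimate the fraction alpha of LLM-modified documents from an
# observed count in a target corpus, by maximum likelihood over a grid.

import numpy as np

def estimate_llm_fraction(k_observed: int, n_docs: int,
                          p_human: float, p_llm: float) -> float:
    """Return the alpha in [0, 1] that maximizes the binomial likelihood
    of seeing k_observed marker-containing documents out of n_docs, under
    the mixture rate p(alpha) = (1 - alpha) * p_human + alpha * p_llm."""
    alphas = np.linspace(0.0, 1.0, 10_001)
    p_mix = (1 - alphas) * p_human + alphas * p_llm
    # Binomial log-likelihood; the constant binomial coefficient is
    # omitted because it does not depend on alpha.
    log_lik = (k_observed * np.log(p_mix)
               + (n_docs - k_observed) * np.log(1.0 - p_mix))
    return float(alphas[np.argmax(log_lik)])

# Illustrative numbers only (invented for this sketch): the marker appears
# in 2.1% of pre-LLM abstracts, in 15% of known LLM-written abstracts, and
# in 900 of 20,000 abstracts in the corpus being measured.
alpha_hat = estimate_llm_fraction(k_observed=900, n_docs=20_000,
                                  p_human=0.021, p_llm=0.15)
print(f"Estimated LLM-modified fraction: {alpha_hat:.1%}")  # ~18.6%
```

Because the mixture rate rises monotonically with the LLM fraction, the maximum-likelihood estimate here simply interpolates the observed rate between the two reference rates; the published method gains its statistical power by pooling this kind of signal across a large vocabulary.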
Notably, in the field of computer science, 22.5% of abstracts and 19.5% of introductions were estimated to be LLM-modified by September 2024, up from just 2.4% in late 2022, shortly before ChatGPT’s release. This jump reflects rapid adoption of AI-assisted writing in a discipline already closely tied to digital innovation. Similar, though slightly less dramatic, increases were observed in electrical engineering (18% for abstracts) and systems science (18.4% for introductions). In contrast, mathematics, a field reliant on precise symbolic reasoning, saw more modest figures: 7.7% of abstracts and just 4.1% of introductions carried the textual fingerprints of LLMs. Even in the prestigious Nature journal portfolio, LLM use reached nearly 9% in both key sections.
Several key drivers of LLM-modified content emerged from the study. Papers by authors who regularly post preprints were more likely to include AI-generated text, perhaps reflecting intense pressure to publish quickly. Papers under 5,000 words, and those in highly competitive fields, also showed higher rates of LLM use. The study further found that the impact of AI writing assistants was unevenly distributed around the world: papers from China and continental Europe showed higher proportions of LLM-altered language than those from North America and the United Kingdom, a gap researchers attribute mainly to the need for English-language support.
The research underlines a critical complication for both AI detection and academic fairness—current text-detection methods can disadvantage non-native English speakers, potentially penalizing those who rely on LLMs primarily for clarity and accuracy rather than for substantive content generation. For Thailand, where the majority of scientific communication is conducted in English yet a relatively small proportion of researchers are native speakers, this challenge is especially pronounced. Thai academics may increasingly feel pressured to use AI tools to keep pace with global standards of fluency, raising questions about equitable access to these technologies and the ethical use of AI for linguistic support.
Expert perspectives are divided on the rapid integration of LLMs in science. The authors of the Nature Human Behaviour study caution that as LLMs become more entrenched, issues of transparency, scientific originality, and the overall diversity of research could be at risk. Quoting directly from the research team’s published statement: “Our observations of the rise of generated or modified papers open many questions for future research. How do such papers compare in terms of accuracy, creativity or diversity? How do readers react to LLM-generated abstracts and introductions? How do citation patterns of LLM-generated papers compare with other papers in similar fields? How might the dominance of a limited number of for-profit organizations in the LLM industry affect the independence of scientific output?”
Within Thailand’s academic sphere, the shift toward AI writing assistants is already being debated. University lecturers and members of national research council boards (cited anonymously to preserve privacy) express both optimism and concern: some argue that LLMs empower non-native speakers and democratize international publishing, while others fear that overreliance may erode students’ core writing skills, dilute the creative side of scientific inquiry, and create a two-tiered academic community divided by technological access.
The challenges extend beyond writing assistance itself to governance and detection. Globally, many prominent journals are still developing or revising policies on LLM use, while traditional plagiarism-detection tools often fail to recognize text generated by modern AI models. In Thailand, policy responses have included recommended AI-use guidelines issued by several leading universities and the Office of the Higher Education Commission. These efforts align with international calls for transparency, requiring authors to disclose any substantial use of AI tools in the preparation of manuscripts (Nature journal policy), yet enforcement and best-practice models remain fluid and contentious.
Interestingly, the deployment of LLMs is not just a matter of policy; it also intersects with long-standing practices in Thailand’s research environment. As in many Asian countries, Thai researchers often face pressure to publish in international journals for academic advancement, funding, or institutional rankings. LLMs may help bridge the linguistic gap and accelerate the writing process, but they also raise the specter of hyperproductivity at the expense of research depth and originality, an intensification of the familiar “publish or perish” dynamic.
Global experts warn that unchecked LLM use could encourage formulaic writing, potentially narrowing the diversity of scientific expression. The risk, as outlined by the Nature Human Behaviour authors, is that academic communication could become overly standardized, subtly shaped by the priorities and limitations of a handful of for-profit AI firms that build and control the underlying language models.
Concerns about LLMs and scientific publishing go beyond questions of authorship and style—they touch on fundamental issues of trust: if readers, reviewers, or editors cannot reliably determine which parts of a paper have been AI-generated, the integrity of research records may be undermined. For developing economies such as Thailand, where building scientific credibility and international recognition is a key strategy for driving higher education reform, such doubts could have profound long-term effects.
There is also a cultural context worth noting. Thailand places high value on both technological advancement and educational rigor. As AI continues to permeate academic life, the kingdom faces distinctive pressures. On one hand, embracing innovation is seen as a sign of progress and a requisite for staying competitive in the digital economy. On the other, the ideal of learning as gradual self-cultivation remains a cornerstone of Thai academic culture, and shortcuts such as LLM-drafted prose could be read as undermining the spirit of learning itself.
Looking ahead, several trends are likely to shape the impact of AI writing tools on scientific publishing, both in Thailand and globally. First, LLM-detection technologies will almost certainly become more robust, potentially offering clearer ways to balance transparency and fairness. Second, ongoing debates about research ethics and policy will likely produce more standardized guidelines for authors, editors, and reviewers. There is also scope for new educational initiatives to help students and early-career researchers use LLM tools responsibly while developing their own academic voices.
What can Thai researchers, educators, and students do now in response to these sweeping changes? While there is no simple solution, experts interviewed for this report recommend a three-pronged approach. First, actively seek training on AI writing assistance—learn how to use ChatGPT and similar tools transparently and responsibly. Second, engage in open discussions within academic communities about the ethical boundaries of LLM use, including when and how to disclose AI assistance. Third, institutions should develop clear but flexible policies tailored to local needs, supporting equal access to high-quality AI support tools and addressing the unique challenges faced by non-native English speakers.
Ultimately, while ChatGPT and its peers offer transformative potential for science communication, the challenge lies in ensuring that their benefits are harnessed without compromising the diversity, creativity, and integrity that make scientific research meaningful—both in Thailand and abroad. To stay ahead, Thai educational authorities and academic institutions will need to closely monitor both international developments and local experience, updating policies and teaching practices in line with the shifting landscape of scholarly communication.
For those wishing to learn more, readers are encouraged to consult the full research summary on Phys.org and the Nature Human Behaviour journal policy pages for additional context on AI and research ethics.