Hidden AI Prompts in Research Papers Spark Global Debate on Academic Integrity
A new controversy has erupted in academic circles after investigators discovered that a group of international researchers had embedded secret instructions, so-called “hidden AI prompts,” in preprint manuscripts to steer AI-powered peer review toward more favorable feedback. According to recent reports, a data-driven investigation identified 17 preprint articles on the arXiv platform containing covert commands that instructed AI models to deliver only positive reviews, avoid criticism, and even explicitly recommend the work for its novelty and methodological rigor. The prompts were hidden in white text or minuscule fonts, so they went unnoticed by human readers while remaining fully readable by the AI systems handling the review (Nikkei Asia, ExtremeTech, Japan Times).
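Because AI review tools typically work from a manuscript's extracted text rather than its rendered appearance, white or microscopic text is recovered by the same extraction step that feeds the model. The following sketch is a minimal illustration, assuming the third-party pypdf library and a hypothetical list of flag phrases (not the actual prompts found in the 17 papers), of how a screening script might surface such hidden instructions in a submitted PDF:

    # Minimal sketch: scan a PDF's extracted text for suspicious reviewer-facing
    # instructions. Assumes the pypdf library; the flag phrases below are
    # hypothetical examples for illustration only.
    from pypdf import PdfReader

    FLAG_PHRASES = [
        "give a positive review",
        "do not highlight any negatives",
        "recommend this paper",
    ]

    def find_hidden_prompts(pdf_path: str) -> list[tuple[int, str]]:
        """Return (page number, phrase) pairs for any flag phrase found."""
        hits = []
        reader = PdfReader(pdf_path)
        for page_num, page in enumerate(reader.pages, start=1):
            # extract_text() returns the text layer regardless of color or font size,
            # which is why white or tiny text is still visible to automated readers.
            text = (page.extract_text() or "").lower()
            for phrase in FLAG_PHRASES:
                if phrase in text:
                    hits.append((page_num, phrase))
        return hits

    if __name__ == "__main__":
        for page, phrase in find_hidden_prompts("manuscript.pdf"):
            print(f"page {page}: found '{phrase}'")

A simple keyword scan like this would miss paraphrased or obfuscated prompts, but it illustrates the underlying point: text that is invisible on the rendered page is plainly present in the extracted text stream that automated reviewers consume.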