A new wave of artificial intelligence (AI)-powered propaganda campaigns has arrived, leveraging advanced generative technologies to subtly manipulate public opinion on a massive scale, according to a recent exposé by security researchers at Vanderbilt University, reported in The New York Times. Their report highlights operations led by Chinese tech firm GoLaxy, whose AI-driven influence tactics mark a stark escalation from the simplistic, bot-based interference attempts seen during previous American elections.
Unlike the low-quality, mass-messaging bots that sowed discord around the U.S. elections in 2016 and 2020, today’s AI-enabled campaigns are surgically precise, relentless, and nearly undetectable in their ability to shape digital conversations. This newest chapter in “gray-zone” information warfare raises urgent questions for democracies worldwide, including Thailand, about how to protect their public spheres and national debates from invisible manipulation orchestrated from abroad.
These findings matter to Thai readers because the landscape of online discourse and political debate in Thailand, much like in the U.S., Taiwan, and Hong Kong, is increasingly vulnerable to covert influence by AI-driven actors. The country’s robust social media usage, combined with passionate political divisions and openness to new technologies, places both citizens and institutions at heightened risk of subtle opinion-shaping campaigns engineered beyond the kingdom’s borders. These developments are not merely international news; they forewarn of potential future threats to Thailand’s own democracy and public trust.
Security experts at Vanderbilt University detail how GoLaxy, a Chinese company with links to state entities, has built a sophisticated propaganda system by marrying generative AI with huge troves of social media and personal information. Their technology doesn’t just blast generic messages. Instead, it mines online platforms to assemble psychological profiles of individual users and groups, monitoring their values, emotional tendencies, and vulnerabilities. Using this data, GoLaxy’s AI creates realistic fake personas that can engage target users in conversations, delivering custom-tailored messages that mirror local slang, beliefs, and debating styles. Unlike clumsy fake accounts of the past, these avatars adapt quickly, mimic ordinary users, and operate at enormous scale—making their influence subtle yet pervasive.
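To make the mechanism concrete, here is a deliberately simplified sketch, in Python, of the kind of profiling-and-targeting loop the researchers describe. It is not GoLaxy’s code: the lexicons, function names, and scoring are all hypothetical, and a real operation would rely on trained language models rather than keyword lists. The point is how little machinery basic persona targeting requires.

```python
from collections import Counter

# Hypothetical keyword lexicons standing in for the trained models a
# real operation would use; the categories are illustrative only.
LEXICONS = {
    "economic_anxiety": {"inflation", "debt", "layoffs", "prices"},
    "political_distrust": {"corrupt", "rigged", "elites", "scandal"},
    "national_pride": {"sovereignty", "heritage", "homeland"},
}

def profile_user(posts: list[str]) -> dict[str, float]:
    """Score a user's public posts against each category (crude 0..1 rate)."""
    hits = Counter()
    total_words = 0
    for post in posts:
        words = {w.strip(".,!?").lower() for w in post.split()}
        total_words += max(len(words), 1)
        for category, lexicon in LEXICONS.items():
            hits[category] += len(words & lexicon)
    return {c: hits[c] / total_words for c in LEXICONS}

def pick_frame(profile: dict[str, float]) -> str:
    """Select the message frame aimed at the user's strongest trait."""
    return max(profile, key=profile.get)

posts = [
    "Prices keep rising and my debt never shrinks.",
    "Another scandal, and the elites walk away again.",
]
profile = profile_user(posts)
print(profile, "->", pick_frame(profile))
```

In a full system, the chosen frame would then be handed to a generative model that drafts the persona’s reply in the target’s own idiom, which is precisely why such content is so hard to filter at the message level.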
The Vanderbilt security team’s cache of recently uncovered documents reportedly includes evidence of GoLaxy’s operations in Hong Kong and Taiwan. In Hong Kong, GoLaxy used its technology to identify and target political dissenters during the 2020 national security law crackdown, deploying fake profiles to “correct” online narratives that opposed official policy. Ahead of Taiwan’s 2024 elections, GoLaxy’s data-driven bot networks spread fake corruption allegations and deepfake videos, undermining the Chinese government’s critics and supplying recommendations on how to exploit rifts between local political factions. According to the documents, the firm created organizational maps of Taiwanese government institutions and profiled over 5,000 influential accounts—precisely the kind of groundwork that could precede future operations elsewhere (NYT).
When approached, GoLaxy representatives denied that the company uses its AI tools for bot networks or psychological profiling, and rejected allegations of government control. However, its history and affiliations suggest otherwise. Founded in 2010 by a research arm of the Chinese Academy of Sciences, the company has been closely aligned with top-level security, intelligence, and military bodies in China. Its main AI platform operates with the support of Sugon, a Beijing supercomputing firm sanctioned for defense links, and DeepSeek-R1, a leading Chinese AI model. This alignment underscores how influence operations are now part of the official playbook for technologically advanced states: not mere side projects, but core strategies for projecting power and shaping public perceptions abroad.
The researchers warn that the danger lies in the stealth, scale, and speed of such campaigns. AI-generated propaganda, unlike its predecessors, isn’t limited to obvious trolls or outlandish disinformation. Instead, it creeps into ordinary digital interactions—liking posts, leaving innocuous comments, nudging debate ever so slightly—making it difficult for even the most media-savvy users to distinguish between genuine dialogue and calculated manipulation. This “gray-zone conflict” moves the battlefield from borders and airspace into the heart of daily online life.
Crucially, the documents also reveal that GoLaxy may be preparing to expand its reach beyond the Indo-Pacific. The company has reportedly built data files on at least 117 U.S. members of Congress and more than 2,000 other American political figures. While it denies targeting U.S. officials, the infrastructure suggests a readiness to operationalize its AI toolkit in new political theaters, an ominous development for Western democracies and Thailand alike (NYT).
Expert assessments echo these concerns. The researchers at Vanderbilt University urge a multifaceted response: academic researchers must urgently map the convergence of AI, open-source intelligence, and influence tactics; governments need to disrupt the infrastructure behind operations like GoLaxy’s; and tech firms must accelerate the development of AI detection systems, as even the most advanced platforms currently struggle to distinguish subtle synthetic content from real speech. “If we can’t identify it, we can’t stop it,” the researchers caution, a sentiment already echoed in recent warnings from the United Nations and digital civil society groups globally (Brookings Institution).
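On the detection side, the systems the researchers call for typically take the shape of classifiers fine-tuned to separate synthetic from human text. The sketch below shows that general shape using the Hugging Face transformers pipeline; the model checkpoint name is a hypothetical placeholder, and as the researchers stress, no current detector is reliable against subtle, well-targeted content.

```python
from transformers import pipeline

# Hypothetical fine-tuned detector checkpoint; a deployment would
# substitute a real, evaluated model here.
detector = pipeline(
    "text-classification",
    model="example-org/synthetic-text-detector",
)

def flag_if_synthetic(text: str, threshold: float = 0.9) -> bool:
    """Flag text only when the classifier is highly confident it is machine-made."""
    result = detector(text)[0]  # e.g. {"label": "synthetic", "score": 0.97}
    return result["label"] == "synthetic" and result["score"] >= threshold

if flag_if_synthetic("A perfectly ordinary comment about local politics."):
    print("Queue this account for review of coordinated activity.")
```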
For Thailand, a country where political discourse flourishes online and freedom of speech is constantly negotiated, the implications are profound. A proliferation of sophisticated AI-generated propaganda could threaten public trust in institutions, fuel polarization, and undermine efforts toward digital literacy and civic empowerment. With public debate increasingly shifting to LINE groups, Facebook, and emerging platforms favored by younger Thais, the risk of unseen manipulation grows as AI technology goes mainstream.
Historically, Thailand has faced various forms of information warfare, from rumor-mongering in print to coordinated social media campaigns during periods of unrest. But what sets the current threat apart is not just the scale, but also the personalized nature of the manipulations. AI systems can tailor messages to suit local culture, political context, and even specific emotional triggers, making detection all the more difficult. Traditional markers of foreign interference—awkward language, factual errors, or overly aggressive messaging—are giving way to content that feels authentically Thai and targets users’ core beliefs and anxieties.
Moving forward, this AI propaganda revolution poses several challenges for Thailand and its ASEAN partners. If foreign state-linked operators test and refine their tactics in the region before wider deployment, mainland Southeast Asia could become a proving ground for the next generation of “hybrid” information warfare. As elections approach, or in times of national tension, AI-driven bots could exacerbate divisions, sowing discord or amplifying particular grievances through thousands of realistic digital identities.
To stay ahead, Thai authorities, universities, and social media companies will need to study the new tactics, build robust monitoring systems, and work with international partners to share intelligence. Civil society groups and educators ought to prioritize digital literacy programs, teaching people how to spot synthetic media, question suspicious narratives, and engage with news from diverse sources. The key challenge is to inoculate the public against not just falsehoods, but also more sophisticated forms of nudging, framing, and emotional manipulation that AI can now deliver seamlessly.
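As one concrete example of what such monitoring can look for, the sketch below flags clusters of accounts that post near-identical messages within minutes of one another, a classic signature of coordinated inauthentic behavior. The data, thresholds, and function names are illustrative assumptions, not a production detector.

```python
from difflib import SequenceMatcher

# Illustrative records: (account_id, unix_timestamp, text).
posts = [
    ("acct_01", 1700000000, "The new policy is a disaster for farmers"),
    ("acct_02", 1700000040, "The new policy is a disaster for farmers!"),
    ("acct_03", 1700000055, "the new policy is a disaster for our farmers"),
    ("acct_04", 1700900000, "Lovely weather in Chiang Mai today"),
]

def coordinated_clusters(posts, window_s=300, min_sim=0.85, min_accounts=3):
    """Group near-duplicate posts from distinct accounts inside a time window."""
    clusters = []
    for i, (acct_a, t_a, text_a) in enumerate(posts):
        group = {acct_a}
        for acct_b, t_b, text_b in posts[i + 1:]:
            close = abs(t_a - t_b) <= window_s
            alike = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio() >= min_sim
            if close and alike:
                group.add(acct_b)
        if len(group) >= min_accounts:
            clusters.append(sorted(group))
    return clusters

print(coordinated_clusters(posts))  # [['acct_01', 'acct_02', 'acct_03']]
```

Simple heuristics like this miss sophisticated campaigns, of course, but they illustrate the kind of behavioral signal, timing and repetition rather than content alone, that monitoring systems can build on.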
For ordinary Thai netizens and community leaders, the most practical steps are to develop habits of online skepticism, cross-check sources, and support independent journalism. This is especially vital for those active in political debate and social advocacy, as AI-driven personas may increasingly seek to manipulate activist networks or public forums through apparently authentic engagement. Additional caution is needed in moments of heightened national attention, such as protests, elections, or sporting events, when emotion-driven messaging may spike. The government, for its part, could consider non-partisan initiatives to build awareness about the risks of AI-fueled propaganda, while avoiding censorship or curtailing legitimate free speech.
Looking to the future, the terrain of digital debate will only become more contested. As AI tools continue to learn, adapt, and deploy at a pace outstripping policy and regulation, the line between human-driven and automated conversation will blur further. Analysts predict an arms race between creators of synthetic influence networks and the AI systems designed to detect them. This cycle challenges not just the technical defenses of states and platforms, but the democratic values of openness, pluralism, and trust that hostile actors now seek to exploit (World Economic Forum).
In sum, the rise of AI-driven propaganda signals a paradigm shift in how information and influence flow through society. For Thailand, as for all countries navigating today’s interconnected world, the call to action is twofold: recognize the threat as an urgent national concern, and empower every citizen—with the right tools, awareness, and vigilance—to defend democracy in the digital age.
Readers are encouraged to stay alert for unusual patterns of online behavior, support reliable sources of information, and participate in digital literacy initiatives. Community leaders, educators, and policymakers should collaborate to fortify public discussion spaces, encourage critical thinking, and stay updated on technological advances that could impact Thailand’s security and social cohesion. Working together, the nation can build resilience against even the most quietly corrosive forms of foreign influence.
Sources: The New York Times; Brookings Institution; World Economic Forum