The rapid rise of generative artificial intelligence (AI) is increasingly undermining the foundations of democracy worldwide, according to new research and official warnings. Tools that generate realistic fake images, videos, and audio are being weaponized to deceive voters, influence election outcomes, and foster distrust in democratic processes—often with little oversight or effective countermeasures from authorities or technology firms. This wave of AI-driven disinformation has already played a pivotal role in elections from Europe to Asia, prompting urgent debates on safeguarding electoral integrity and political discourse.
This issue is highly significant for Thai readers, as Thailand continues to strengthen its democratic institutions and prepares for future elections in a digital, hyper-connected era. Understanding global trends and vulnerabilities around AI-driven electoral manipulation can help inform policies, voter education, and technological safeguards to protect the country’s political stability and social cohesion.
Over the last two years, the emergence of powerful generative AI tools has radically changed the landscape of election interference. Unlike earlier disinformation campaigns—often run by human “troll farms” using awkward language or formulaic scripts—AI now allows for the creation of convincing fake photos, videos, and even AI-cloned voices in local dialects with ease and speed. As noted by a professor from the National University of Political Studies and Public Administration in Bucharest, Romania, these sophisticated mechanics are “so advanced that they truly managed to get a piece of content to go very viral in a very limited amount of time.” The scale and quality with which AI can now fabricate and distribute such material are historically unprecedented (nytimes.com).
According to the International Panel on the Information Environment, more than 80% of the democratic elections held during 2024’s unusually crowded electoral calendar—including contests in Germany, Poland, Portugal, India, and the United States—involved some form of AI use. About a quarter of that use served legitimate purposes, such as translating campaign materials or targeting voter outreach, but the panel classified AI’s role as “harmful” in 69% of surveyed cases. These ranged from simple misinformation to elaborate influence operations aimed at distorting public opinion and eroding trust in democratic processes.
A stark example emerged in Romania, where a covert Russian influence campaign used AI-generated content to sway the outcome of the presidential election. Fake videos and news reports—circulated with a veneer of authenticity—helped propel a fringe far-right candidate to unexpected prominence, leading courts to annul the first-round results. This triggered a fresh round of campaigning but also unleashed even more fabrications in the run-up to the second vote (nytimes.com). The International Panel on the Information Environment identifies this as the first major election in which AI decisively influenced the result—a harbinger of what experts fear will become a recurring phenomenon.
Thailand is not immune to these global currents. The country’s vibrant but occasionally polarized online landscape, high internet penetration, and robust use of social media platforms such as Facebook, TikTok, YouTube, and emerging local platforms, all present both opportunities for informed civic participation and vulnerabilities to AI-enabled disinformation. Measures—such as media literacy campaigns, enhancements in platform moderation, and clear labeling of AI-generated content—will be vital in preparing for future elections and protecting the public sphere.
International researchers are sounding alarms about the dual threats posed by generative AI: the ability to create fabricated content at massive scale and a growing sophistication that makes detection difficult. A 2024 study from the Center for Media Engagement at the University of Texas at Austin found that in India’s general election, campaigners used AI to clone candidates’ voices and likenesses to connect with voters, while deepfakes ran rampant, with scores of manipulated videos and audio clips flooding social media. Most were not clearly labeled as synthetic, blurring the line between authentic and fabricated campaign communication.
Worryingly, recent research from the University of Notre Dame showed that AI-generated “inauthentic” accounts could easily evade detection on major social platforms, including Facebook, Instagram, Threads, X (formerly Twitter), Reddit, TikTok, Mastodon, and LinkedIn. Despite policies barring manipulative uses and disinformation, enforcement has lagged behind, and platforms have struggled to balance commercial interests against the risks to electoral processes and public trust.
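To make the detection problem concrete, consider the behavioral heuristics platforms have traditionally leaned on to catch automated accounts. The sketch below is purely illustrative—it is not any platform’s actual system, and the data structure, weights, and thresholds are hypothetical. It scores an account on two classic bot signals: clockwork-regular posting intervals and near-duplicate post text.

```python
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class AccountActivity:
    """Hypothetical record of one account's recent public activity."""
    post_timestamps: list[float]  # Unix timestamps of recent posts
    post_texts: list[str]         # text of those same posts

def suspicion_score(activity: AccountActivity) -> float:
    """Combine two weak signals into a rough 0..1 bot-likeness score.

    Signal 1: very regular posting intervals suggest scheduled automation.
    Signal 2: a high share of duplicate posts suggests templated content.
    Both signals are trivially evaded by adding timing jitter and
    paraphrasing each post—exactly what generative AI makes cheap.
    """
    ts = sorted(activity.post_timestamps)
    if len(ts) < 3 or not activity.post_texts:
        return 0.0  # not enough activity to judge

    intervals = [b - a for a, b in zip(ts, ts[1:])]
    avg = mean(intervals)
    # Coefficient of variation: near 0 means clockwork-regular posting.
    cv = pstdev(intervals) / avg if avg > 0 else 0.0
    regularity = max(0.0, 1.0 - cv)  # 1.0 = perfectly regular schedule

    unique_ratio = len(set(activity.post_texts)) / len(activity.post_texts)
    duplication = 1.0 - unique_ratio  # 1.0 = every post identical

    return 0.5 * regularity + 0.5 * duplication
```

The Notre Dame finding is, in effect, that generative AI collapses exactly these fingerprints: once every post is fluent, freshly worded, and published on a humanlike schedule, behavioral heuristics of this kind lose most of their discriminating power, pushing platforms toward costlier network-level and provenance-based analysis.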
Official concern has risen in the wake of disclosures such as OpenAI’s announcement that it had disrupted five major AI-based influence operations in 2024 targeting voters in Rwanda, the United States, India, Ghana, and the European Union. One Russian operation used AI to create a bot account on X supporting the far-right Alternative for Germany (AfD), quickly amassing 27,000 followers. Such campaigns are not only foreign-driven: domestic actors in a variety of countries are now using these technologies to amplify partisan narratives, often exploiting latent social divisions for political gain.
In the United States, the harmful impact of AI-driven manipulation during the 2024 presidential election generated rare bipartisan concern, with agencies such as the Federal Bureau of Investigation and the Cybersecurity and Infrastructure Security Agency issuing public warnings. However, the subsequent dismantling of specialized teams under the current administration has left the country—and arguably other democracies—less equipped to counteract rapidly evolving threats from both internal and external actors.
A professor from Florida International University who led the international panel’s survey observed, “In 2024, the potential benefits of these technologies were largely eclipsed by their harmful misuse.” She noted that malicious actors wield AI to rapidly craft and disseminate persuasive disinformation tailored to sway specific voter groups, outpacing manual countermeasures.
The consequences are reverberating widely across Europe. The European Union has opened investigations into how TikTok and other platforms responded to AI-generated disinformation campaigns targeting elections in Romania, Ireland, and Croatia. TikTok, while reporting that it removed more than 7,300 posts in the two weeks before Romania’s run-off, acknowledged difficulties in identifying all deceptive AI-generated content in time (nytimes.com). These cases highlight the enduring challenges of robust content moderation and the speed at which harmful AI-generated material can spread at critical moments.
For emerging democracies—and Thailand in particular—the stakes could hardly be higher. The Thai government, the Election Commission, and civil society organizations are already grappling with the complexities of campaigning in a digital age, including the proliferation of online hate speech, deceptive advertising, and foreign-sponsored misinformation. AI compounds each of these challenges. A surge in hyper-realistic deepfakes purporting to show candidates making controversial statements could influence close races or incite unrest. Moreover, the sheer volume of AI-generated content may “pollute the information ecosystem,” as articulated by the founder of CivAI, a nonprofit focused on AI risk research. The resulting disillusionment and confusion among voters could lead to apathy, erosion of political consensus, or even rejection of democratic governance itself.
The history of election interference—whether through manipulation of traditional media, vote buying, or cyber operations—provides an important reference point. What is fundamentally new is the automation, personalization, and scalability that AI brings to these tactics. Past Thai elections have occasionally been marred by misinformation and online harassment, but there is little precedent for the sweeping precision and reach of modern generative AI. Social media literacy—an area where Thai youth and NGOs have been active—will need to evolve rapidly, with campaigns addressing not only fact-checking but also the identification and critical analysis of synthetic media.
Looking forward, experts warn that the next generation of AI models will be even more adept at evading detection and tailoring messages, making conventional safeguards and digital literacy efforts alone insufficient. Calls are rising for regulatory reform at both the technology-platform and national levels. The European Union’s Digital Services Act and AI Act provide initial models for platform accountability, algorithmic transparency, and swift removal of harmful content. Thailand’s own Ministry of Digital Economy and Society has launched preliminary efforts to monitor election-related disinformation, but experts stress that resources, cross-agency coordination, and technical investment must scale up dramatically.
From a global perspective, international cooperation has become essential, with countries sharing best practices for threat intelligence, early warning systems, and counter-disinformation campaigns. For Thailand, engagement with global bodies such as the International Panel on the Information Environment and the United Nations’ Digital Trust and Security programs can help bolster local capacity to detect and respond to AI-powered threats. Partnerships with regional neighbors—including ASEAN’s cybersecurity initiatives—can also foster digital solidarity and resilience against cross-border interference campaigns.
For Thai society, practical steps can mitigate individual and community vulnerability to AI-driven electoral manipulation:
- Support and participate in verified, independent media channels that adhere to strict editorial standards;
- Invest in public education initiatives—from schools to community organizations—that teach critical thinking and media literacy skills, including how to spot AI-generated fabrications (a simple illustration follows this list);
- Encourage political parties and candidates to commit publicly to transparency regarding their use of digital campaign tools, including AI;
- Demand accountability from technology platforms operating in Thailand, urging them to rapidly identify and flag or remove misleading AI-generated content, especially during election periods;
- Advocate for government oversight, timely threat assessments, and legislative frameworks tailored to the AI age without stifling freedom of expression or dissent.
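One of these recommendations—teaching people to spot AI-generated fabrications—can be made concrete. Fact-checkers often begin by inspecting a suspect image’s metadata. The sketch below is a minimal illustration under stated assumptions, not a forensic tool: it uses the Pillow library, a widely used Python imaging package, to read EXIF tags, and the file name is hypothetical. Many AI image generators emit files with no camera metadata, whereas genuine photos usually record a camera model and capture time; because metadata is easily stripped or forged, the result is only a hint to investigate further, never proof on its own.

```python
from PIL import Image          # pip install Pillow
from PIL.ExifTags import TAGS  # maps numeric EXIF tag IDs to readable names

def describe_metadata(path: str) -> dict[str, str]:
    """Return human-readable EXIF tags (camera model, capture time, etc.).

    An empty result is consistent with—but not proof of—AI generation,
    since screenshots and metadata stripping also produce EXIF-free files.
    """
    exif = Image.open(path).getexif()
    return {str(TAGS.get(tag_id, tag_id)): str(value)
            for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = describe_metadata("suspect_photo.jpg")  # hypothetical file name
    if not tags:
        print("No EXIF metadata found: treat the image with extra caution.")
    else:
        for name, value in tags.items():
            print(f"{name}: {value}")
```

Robust verification layers several such signals together: provenance standards like C2PA content credentials, reverse-image search, and, above all, human judgment about the source and context of the material.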
Ultimately, Thailand’s response to the AI-driven erosion of democracy will shape the health and credibility of its institutions for years to come. The country’s rich history of grassroots activism, inclusive debate, and innovation now faces a new and evolving test. As AI technologies continue to advance, the vigilance and adaptability of voters, officials, and the media will be ever more critical in safeguarding not only the outcome of elections but the fabric of Thai democracy itself.
For further reading and the original investigative report, see nytimes.com.