A new wave of AI-powered deception is challenging democratic systems worldwide, with AI-generated fake images, video, and audio making misinformation more convincing than ever. Experts warn that without stronger safeguards, voters can be misled, public trust can erode, and election integrity can be compromised. This is a pressing issue for Thai readers preparing for future elections in a highly connected digital environment.
Thailand’s online landscape is vibrant yet vulnerable. High internet penetration and widespread use of social media mean information—both accurate and false—spreads quickly. To protect the public sphere, Thailand needs clear labeling of AI-generated content, better media literacy campaigns, and stronger platform moderation. These measures will help ensure an informed electorate and stable social cohesion.
Generative AI has transformed election interference. Earlier campaigns relied on human operators producing robotic scripts, but today’s tools can generate realistic deepfakes, cloned voices, and tailored misinformation at scale. A scholar from a leading European university cautions that these techniques can push content to virality in a very short time, reshaping how political narratives spread. The volume of such material, and the speed at which it spreads, are unprecedented in recent history.
International assessments show AI played a role in a majority of 2024 elections, including major contests in Europe and Asia. While some AI uses supported legitimate tasks like translation and targeted outreach, many instances involved manipulation aimed at distorting public opinion. In some cases, the impact was strong enough to influence outcomes or undermine trust in democratic institutions.
A notable case involved AI-generated content used to influence a presidential election in Eastern Europe, where manipulated media helped elevate a fringe candidate and triggered legal scrutiny. This event is viewed by researchers as a warning sign of what could become a recurring pattern as AI capabilities advance and become harder to detect.
Thailand cannot remain insulated. The combination of online discourse, political polarization, and evolving AI tools creates both opportunities for civic participation and risks of disinformation. Safeguarding elections will require sustained investment in media literacy, mandatory labeling of synthetic media, and robust moderation by platforms operating in the country.
Global researchers warn that AI’s ability to produce convincing content at scale will outpace traditional countermeasures. A study on recent Indian general elections highlights how deepfakes and cloned media flooded social platforms, with many pieces not clearly marked as synthetic. This blurred line between real and fabricated content complicates voters’ ability to discern truth.
Research also shows that AI-generated fabrications can evade detection on major platforms, challenging enforcement of disinformation policies. As platforms balance user growth with safety, they must improve detection and accountability for political manipulation, especially around elections.
There have been notable safety efforts, such as actions by major AI developers to disrupt influence operations targeting voters in multiple regions. Yet campaigns continue to evolve, illustrating both the global reach and the domestic risk of AI-driven manipulation. Some foreign and domestic actors actively amplify partisan narratives online, exploiting social tensions for political gain.
In the United States, warnings about AI-driven manipulation during recent elections underscored the need for stronger public-private cooperation. Since then, some specialized teams tasked with countering disinformation have faced downsizing, raising concerns about preparedness for rapid technological threats.
Experts emphasize that the next generation of AI models will be more adept at evading detection and tailoring messages to specific audiences. This reality calls for regulatory reforms at platform and national levels. The European Union’s Digital Services Act and AI Act offer early models for platform accountability and swift removal of harmful content. Thailand’s Ministry of Digital Economy and Society has begun monitoring election-related disinformation, but more resources and cross-agency coordination are essential.
International collaboration is increasingly vital. Countries share threat intelligence, early warning systems, and counter-disinformation strategies. For Thailand, engagement with global and regional bodies can strengthen local capacity to detect and respond to AI-powered threats. Partnerships with ASEAN cybersecurity initiatives can build regional resilience against cross-border manipulation.
Practical steps for Thai society to reduce risk:
- Support verified, independent media that uphold strict editorial standards.
- Expand public education on media literacy, including how to spot AI-generated fabrications.
- Encourage political actors to publicly disclose their use of digital campaign tools, including AI.
- Hold technology platforms accountable for rapid identification and removal of misleading AI-generated content, especially during elections.
- Advocate for government oversight, timely threat assessments, and legislative frameworks that balance security with freedom of expression.
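To make the labeling recommendation above more concrete: one emerging standard for marking synthetic media is C2PA ("Content Credentials"), which embeds a signed provenance manifest inside the media file itself. The sketch below is a deliberately crude illustration, not a real verifier: it only scans a file's raw bytes for the `c2pa` marker string that typically appears in such manifests. The function names are ours, and a production tool would need to parse and cryptographically validate the manifest with an official C2PA SDK rather than grep for bytes.

```python
# Crude illustrative check: does a media file appear to carry a C2PA
# ("Content Credentials") provenance manifest? This only looks for the
# "c2pa" marker bytes that commonly appear inside embedded manifests.
# It does NOT validate signatures, so it can be fooled either way;
# a real verifier must use an official C2PA SDK.
from pathlib import Path

C2PA_MARKER = b"c2pa"  # marker string used in C2PA manifest labels

def has_provenance_marker(data: bytes) -> bool:
    """Return True if the raw bytes contain a C2PA marker string."""
    return C2PA_MARKER in data

def file_has_provenance_marker(path: str) -> bool:
    """Convenience wrapper: scan a file on disk."""
    return has_provenance_marker(Path(path).read_bytes())

# In-memory stand-ins for a labeled and an unlabeled image:
labeled = b"\xff\xd8\xff\xe1 ...jumbf box... c2pa.assertions ..."
unlabeled = b"\xff\xd8\xff\xe0 plain JPEG bytes, no manifest"
print(has_provenance_marker(labeled))    # True
print(has_provenance_marker(unlabeled))  # False
```

Newsrooms and fact-checkers would pair a presence check like this with signature validation and with reverse-image search, since absence of a label proves nothing about authenticity.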
Thailand’s response to AI-driven threats will shape the health and credibility of its institutions for years. The country’s history of civic engagement and innovation offers a strong foundation for resilience, but vigilance is essential as technology evolves. A collaborative approach among voters, officials, media, platforms, and civil society will be crucial to safeguarding democratic governance.
For further context and the original investigative reporting, see global coverage from major outlets on AI and elections.