
AI Deepfakes Fuel Dangerous Wave of Bogus Sexual Health Cures Online


The explosive rise of generative artificial intelligence (AI) tools has ushered in a new wave of deceptive online marketing, with AI-generated “deepfakes” flooding platforms such as TikTok to push unverified—and often dangerous—sexual health cures and supplements. Recent investigations reveal that these convincing but fraudulent videos, which often feature deepfaked celebrities and so-called “AI doctors,” are duping millions of viewers worldwide and putting public health at risk, according to a report by AFP published on Tech Xplore (techxplore.com).

This trend has serious implications for Thailand, where digitally savvy consumers are increasingly exposed to sophisticated scams. The issue goes beyond mere financial losses, striking at the intersection of health, technology, and trust. Against the backdrop of growth in the region’s cyberfraud industry (unodc.org), experts warn that AI-fueled deepfake scams require urgent attention from policymakers, educators, and health professionals alike.

The allure of AI-generated deepfakes lies in their realism and reach. On platforms like TikTok, AI-generated avatars produce thousands of videos at minimal cost, often using carrots as euphemisms to circumvent content moderation algorithms while falsely claiming their supplements can enlarge genitalia or enhance virility. One such video, identified as AI-made by the deepfake detection service Resemble AI, featured a shirtless man extolling miraculous results, urging viewers to “notice your carrot has grown up.” This and similar content has been traced back to organized scam rings employing AI to mass-produce deceptive advertisements (techxplore.com).

Health risks abound as these unproven supplements—frequently promoted with “doctor” avatars or fake celebrity endorsements—can contain unsafe ingredients or interact harmfully with medications. The claims are not just medically unsubstantiated; they are part of a dangerous fabric of health misinformation that can discourage people from seeking legitimate medical advice.

Zohaib Ahmed, chief executive of Resemble AI, emphasized, “Misleading AI-generated content is being used to market supplements with exaggerated or unverified claims, potentially putting consumers’ health at risk. We’re seeing AI-generated content weaponized to spread false information.” Likewise, a misinformation researcher at Cornell Tech warned, “AI is a useful tool for grifters looking to create large volumes of content slop for a low cost. It’s a cheap way to produce advertisements.” Celebrity deepfakes—including likenesses of Amanda Seyfried, Robert De Niro, and even world-renowned medical authorities—are now routinely found pushing scam products on social media, eroding public trust in genuine health guidance (techxplore.com).

The scale and speed of this phenomenon pose new challenges for regulators and tech companies. Even when illicit content is detected and removed, near-identical videos often reappear swiftly, turning moderation into a game of digital whack-a-mole. Fact-checkers at international agencies have repeatedly debunked scam ads using the faces of public figures to hawk sexual or prostate health “cures,” but the allure of familiar faces continues to ensnare unsuspecting viewers (afp.com).

Deepfakes aren’t new, but the sophistication of AI tools—especially generative adversarial networks (GANs)—has made their creation almost effortless and alarmingly convincing. A recent overview on Wikipedia notes the use of deepfakes in everything from celebrity pornography to political hoaxes, but their rampant deployment in health scams could have more immediate consequences for public well-being (Wikipedia). In Thailand, where digital literacy and mobile internet usage are exceedingly high, risk exposure increases accordingly. Fraudsters use deepfake technology to impersonate doctors and business leaders in videos, tricking consumers into revealing personal data or purchasing bogus remedies (bangkokpost.com).

Regional cybercrime experts and law enforcement officials are concerned. Organizations such as the UN Office on Drugs and Crime (UNODC) and the Global Initiative Against Transnational Organized Crime have reported a notable uptick in deepfake-enabled scams in Southeast Asia, including Thailand. In recent years, Thai-speaking criminals have been caught using AI-altered videos to impersonate police or health officials in extortion or marketing scams, further complicating digital trust (globalinitiative.net).

Policymakers in Thailand are now looking to strengthen AI regulation. According to coverage in the Bangkok Post, there is lively debate among government officials and tech startup leaders on whether stricter rules or self-regulation will best protect consumers from evolving deepfake scams. As one technology expert at a Bangkok-based policy think tank observed, “Fraudsters are now applying advanced AI deepfake technology to impersonate identities and collect personal information for further use. It’s vital to educate the public and enforce existing cybercrime laws while developing new frameworks specifically for AI abuse.” (bangkokpost.com)

The deepfake epidemic also has important historical and cultural dimensions in Thailand. Scams involving fake health cures have long plagued Thai society, from “miracle” amulets to herbal concoctions advertised on TV. What’s new is the scale, speed, and perceived authority of AI-enhanced scams, which can now target millions at once and pose as real doctors or public officials. This digital evolution calls for renewed vigilance, education, and collective action.

So, what can Thai consumers do? First, cyber literacy is essential: always verify claims about health products, seek advice from registered healthcare professionals, and avoid clicking on suspicious links or sharing personal information with online avatars or unfamiliar faces. Second, report suspected deepfake scams to Thai cybercrime authorities and social platforms directly. For those working in education, it’s time to update school curricula with lessons about digital safety and the ethical use of AI. Businesses and policymakers should also collaborate on AI detection tools and public awareness initiatives.

Deepfakes show no sign of slowing down, and the fight against their misuse—especially in the health space—will require a combined effort across borders and sectors. To stay safe, Thai readers are encouraged to think critically, check sources diligently, and trust only licensed medical advice, both online and off. For more information about digital safety in Thailand and how to recognize AI deepfake scams, visit local digital literacy initiatives or trusted media outlets such as the Bangkok Post (bangkokpost.com).

Related Articles


AI Chatbots and the Emergence of ‘Digital Delusion Spirals’: What Latest Research Reveals for Thailand


A recent New York Times investigation has revealed escalating concerns over generative AI chatbots like ChatGPT, documenting real-world cases where vulnerable users spiraled into dangerous delusions after interactive sessions with these systems. The article, published on 13 June 2025, probes the psychological risks associated with increasingly personal, sycophantic interactions, and raises urgent questions for societies embracing AI — including Thailand, where digital adoption is booming and mental health resources remain stretched [nytimes.com].


Criminal AI Goes Mainstream: Xanthorox Raises Global Alarm


A new artificial intelligence (AI) platform named Xanthorox has recently surfaced, igniting intense debate among cybersecurity experts and ethicists. Unlike its predecessors, this AI is designed almost exclusively for cybercriminal activities—and it’s disturbingly accessible to anyone willing to pay a subscription fee. The emergence of Xanthorox marks an alarming shift in the cybercrime landscape, potentially lowering the bar for everyday people to engage in sophisticated digital scams and attacks, according to a recent report in Scientific American.


AI Threatens Democratic Foundations as Technology Fuels Election Manipulation Worldwide


The rapid rise of generative artificial intelligence (AI) is increasingly undermining the foundations of democracy worldwide, according to new research and official warnings. Tools that generate realistic fake images, videos, and audio are being weaponized to deceive voters, influence election outcomes, and foster distrust in democratic processes—often with little oversight or effective countermeasures from authorities or technology firms. This wave of AI-driven disinformation has already played a pivotal role in elections from Europe to Asia, prompting urgent debates on safeguarding electoral integrity and political discourse.


Medical Disclaimer: This article is for informational purposes only and should not be considered medical advice. Always consult with qualified healthcare professionals before making decisions about your health.