A surge in AI-generated “deepfakes” is fueling a dangerous wave of bogus sexual-health cures and supplements sold online. Reports show convincing videos featuring fake doctors and celebrity likenesses are duping millions, putting public health at risk rather than offering genuine care. The phenomenon, highlighted by AFP and picked up by Tech Xplore, underscores how quickly deceptive content can spread in Southeast Asia and beyond.
In Thailand, heavily connected consumers are repeatedly exposed to sophisticated scams that blend health myths with technology. Data from regional cybercrime researchers indicates a notable rise in scams that misuse AI to impersonate medical authorities and push unverified products. The result is financial loss and eroded trust in legitimate health guidance, making policy and education responses urgent.
The appeal of AI deepfakes lies in their realism and scale. On platforms like TikTok, AI-generated avatars publish thousands of videos at minimal cost, often masking fraudulent claims behind euphemisms and ambiguous wording. A typical clip uses coded phrasing to skirt moderation while claiming a supplement can enhance vitality or bodily function. Investigators trace such clips to organized scam rings that mass-produce misleading advertisements designed to exploit viewer trust.
Health risks are real when people use unproven supplements promoted by fake doctor avatars or celebrity likenesses. Such products can contain unsafe ingredients or interact adversely with medications, and the surrounding misinformation can dissuade people from seeking legitimate medical advice.
Industry voices warn that misleading AI content is being weaponized to spread unverified claims, threatening consumer safety. Experts note that AI lowers costs for scammers, enabling vast volumes of content at speed. Celebrity deepfakes, which imitate famous actors and public authorities, are increasingly used to promote bogus products, further eroding confidence in authentic health information.
Regulators and tech platforms face new challenges: removed counterfeit videos often reappear quickly, turning moderation into a continuous race. International fact-checkers have debunked many ads featuring public figures hawking sexual-health cures, yet familiar faces keep attracting unwary viewers. In Thailand, the risk is heightened by deep internet penetration and widespread mobile use, which amplify the reach of deceptive videos.
AI deepfakes are not new, but current tools make them easier to create and harder to detect. The technology can convincingly imitate doctors and officials, eroding digital trust and threatening privacy. In Thailand, where online access is extensive, exposure grows as scammers use deepfakes to harvest personal data or drive sales of counterfeit remedies.
Regional experts and law enforcement warn of broader implications. Southeast Asian authorities report rising deepfake-enabled scams, including in Thailand, with criminals sometimes using AI-altered videos to pose as police or health officials. This dynamic demands coordinated action across agencies and borders.
Thai policymakers are exploring stronger AI regulation. Coverage from major Thai outlets notes ongoing debates about stricter rules versus self-regulation to protect consumers from evolving deepfake scams. One technology policy expert argues that it is crucial to educate the public, enforce cybercrime laws, and tailor frameworks specifically for AI misuse, while promoting digital literacy and safer online behavior.
The Thai context has historical resonance: scams around fake health cures have long existed, from “miracle” items to herbal remedies advertised on television. What’s new is the speed, scale, and perceived authority of AI-enhanced schemes that can reach millions. The moment calls for renewed vigilance, broader education, and cross-sector collaboration to safeguard public health.
What can Thai consumers do now? Strengthen cyber literacy by verifying health claims with licensed healthcare professionals, avoiding suspicious links, and declining to share personal information with unfamiliar online personas. Report suspected deepfakes to cybercrime authorities and platform operators. Educators should integrate digital-safety lessons into curricula, while businesses and policymakers collaborate on AI-detection tools and public-awareness initiatives.
Deepfakes are unlikely to disappear, so a sustained, cross-border effort is needed to counter their misuse in health marketing. Thai readers should practice critical thinking, verify sources, and trust only licensed medical advice. For ongoing guidance on digital safety and recognizing AI-driven scams, consult trusted national outlets and local digital-literacy programs in Thailand.