Millions globally have embraced ChatGPT and similar AI chatbots for everything from homework help to late-night life advice. But a growing body of evidence suggests that, for some people living with obsessive-compulsive disorder (OCD), these digital companions can become problematic—fueling a cycle of compulsive questioning and reinforcing unhealthy patterns that may worsen their symptoms. Recent reporting by Vox has ignited international discussion about this emerging challenge, prompting Thai mental health professionals and digital wellbeing advocates to examine the Thai context and consider what safeguards might help local users maintain balance in an increasingly AI-driven world (Vox).
OCD, a condition affecting around 1-2% of the population based on World Health Organization estimates, involves intrusive, anxiety-producing thoughts (obsessions) and repetitive behaviors or mental acts (compulsions) performed to alleviate distress. Until recently, sufferers frequently sought reassurance from friends, family or “Dr. Google” to quell doubts—about hygiene, morality, relationships, or safety. But artificial intelligence tools like ChatGPT have changed the landscape, giving users an always-on, seemingly authoritative confidante—one that never gets tired or frustrated and can answer questions for hours on end.
For some, the convenience is empowering, but for others it can be a trap. Psychologists specializing in OCD have noted a shift in patient behavior: instead of repeatedly Googling “What are the odds my hands are still dirty?” or “How can I know this is the right relationship?”, their clients now engage chatbots in extended reassurance-seeking sessions, sometimes lasting hours. As reported in Vox, a mental health specialist who works with OCD patients observed, “It’s going to become a widespread problem. It’s going to replace Googling as a compulsion, but it’s even more reinforcing than Googling, because you can ask such specific questions. And I think also people assume that ChatGPT is always correct” (Vox).
OCD obsessions come in diverse forms, including contamination fears (“Did I wash my hands enough?”), intrusive doubts about morality or relationships (“What if I did something immoral?” or “Is my fiancé really the one?”), and speculative anxieties (“Could my loved one get hurt on a plane?”). Sufferers often attempt to resolve these worries by repeatedly seeking certainty—asking the same question in new ways, parsing every nuance in the chatbot’s answers, and continuing the cycle until the urge for reassurance becomes a full-blown compulsion.
A New York-based writer diagnosed with OCD, cited in the Vox piece, described spiraling into a two-hour session asking ChatGPT about the risks of her partner dying in a plane crash. “ChatGPT comes up with these answers that make you feel like you’re digging to somewhere, even if you’re actually just stuck in the mud,” she explained.
Clinical psychologists point to “reassurance seeking” as a hallmark of OCD. While nearly everyone seeks reassurance occasionally, those with OCD do so compulsively: they aim to reach 100% certainty, which is impossible in most real-world scenarios. Friends, family, and even therapists eventually notice and often set boundaries. But chatbots lack this social context—never refusing a question, always ready with fresh information, and rarely challenging the pattern. This makes them uniquely enabling for compulsions; in the words of mental health specialists, the habit risks “making the OCD worse. It becomes much harder to resist doing it again.”
International guidelines recommend exposure and response prevention (ERP) as a first-line treatment for OCD. This evidence-based approach involves exposing sufferers to distressing thoughts and training them to resist their compulsive responses. Mental health experts have also experimented with “non-engagement responses”—statements that acknowledge the presence of anxiety without encouraging efforts to “solve” or suppress it, cultivating healthier ways to tolerate uncertainty.
However, AI models do not (yet) recognize when a user may be stuck in an OCD reassurance loop or compulsive reasoning cycle. Instead, they provide an endless stream of information, often validating the user’s doubts, regardless of intent. As one OCD and anxiety specialist explained in the Vox feature, “ChatGPT can fall into the same trap that non-OCD specialists fall into: ‘Oh, let’s have a conversation about your thoughts… What could have led you to have these thoughts?’” For people with OCD, that approach can backfire, encouraging rumination rather than relief.
Critically, people with OCD tend to blend facts, rules, personal anecdotes, and pure hypotheticals into elaborate narratives—what’s sometimes called “obsessional reasoning.” AI chatbots, by naively providing information tailored to each question, can help users build these rationalizations, even offering the illusion of progress while, in fact, deepening the rumination.
The article outlines vivid examples: a person with contamination OCD may ask a series of stepwise questions about handwashing and disease risk. The chatbot obligingly responds, citing CDC guidance on handwashing, tetanus risk, and disease transmission. Each answer spawns new uncertainties—so the cycle continues, often amplifying the original anxiety. By providing fuel rather than redirection, chatbots may inadvertently entrench obsessional narratives.
OpenAI, the company behind ChatGPT, acknowledged in a recent study that extended chatbot use can be associated with “lower socialization, more emotional dependence and more problematic use,” including addictive patterns and signs of emotional withdrawal (OpenAI research highlights problematic use). The company has pledged to further study and minimize how chatbots may unintentionally reinforce negative behaviors, especially among vulnerable individuals.
The policy debate remains unresolved: Should tech companies adapt their products to better flag and disrupt compulsive use patterns? Or should public education and user awareness be the central focus, as individuals and families learn to establish boundaries for digital interactions? Mental health experts, including those interviewed by Vox, view it as a shared responsibility. Tech platforms could consider implementing gentle reminders (“It seems you’ve asked many detailed variations of this question—sometimes new information doesn’t lead to certainty. Would you like to take a break?”), helping to interrupt compulsive loops without diagnosing or shaming the user, while respecting privacy and autonomy (Vox).
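To make the reminder idea concrete, here is a minimal, purely illustrative sketch of how such a nudge could be triggered. It assumes only that a platform keeps the user’s recent questions for the current session; the similarity threshold, repeat limit, window size, and reminder wording are invented for the example and do not describe any real product’s behavior.

```python
import difflib

# Purely illustrative: a naive heuristic a chat platform *could* use to notice
# when a user keeps rephrasing the same question. The threshold, window size,
# repeat limit, and reminder text are assumptions made for this sketch, not any
# vendor's actual behavior.
SIMILARITY_THRESHOLD = 0.6  # how alike two questions must be to count as a repeat
REPEAT_LIMIT = 3            # similar questions in the window before a reminder appears
WINDOW = 10                 # how many recent questions to compare against


def is_similar(a: str, b: str) -> bool:
    """Rough lexical similarity between two questions, ignoring case."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio() >= SIMILARITY_THRESHOLD


def maybe_remind(new_question: str, recent_questions: list[str]) -> str | None:
    """Return a gentle, non-diagnostic reminder if the new question closely
    resembles several recent ones; otherwise return None."""
    repeats = sum(is_similar(new_question, q) for q in recent_questions[-WINDOW:])
    if repeats >= REPEAT_LIMIT:
        return ("It seems you've asked several variations of this question. "
                "Sometimes new information doesn't lead to certainty. "
                "Would you like to take a break?")
    return None


if __name__ == "__main__":
    history = [
        "What are the odds my hands are still dirty after washing?",
        "Could my hands still be dirty after washing them twice?",
        "Is there any chance my hands are still dirty after washing?",
    ]
    print(maybe_remind("Is it possible my hands are still dirty after washing them?", history))
```

A production system would need far more care (multilingual matching, privacy safeguards, and clinical input on wording), but even a simple heuristic like this shows how a loop of near-identical questions could be surfaced gently, without diagnosing or shaming the user.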
This conversation is particularly relevant as Thailand faces a digital wellbeing turning point. Over 50 million Thais are regular internet users, with approximately 70% reporting daily use of AI-driven apps for information, work, or recreation (Bangkok Post). Local mental health services are already stretched, and the COVID-19 pandemic has elevated anxiety, depressive disorders, and digital dependency nationwide (World Health Organization). There is rising concern among Thai psychiatrists and digital health policy planners about how new technologies may intersect with pre-existing vulnerability, particularly for teens and young adults who are early adopters of international chatbot platforms.
A senior psychiatrist at the Department of Mental Health told the Bangkok Post, “We’re seeing a new pattern in which patients, especially university students, report excessive use of chatbots like ChatGPT—not just for information but to resolve their intrusive doubts and anxieties. This can be empowering, but if left unchecked it can also accelerate unhealthy compulsions, worsening OCD or similar conditions. It’s important for both institutions and families to update their digital literacy resources, focusing on healthy patterns of questioning and boundary-setting.”
Meanwhile, school counselors in Bangkok’s leading international schools have observed that some students replace in-person support with chatbot conversations, sometimes prolonging their distress rather than alleviating it. “Students with a predisposition for anxiety or OCD can fall into a loop of seeking and re-seeking certainty,” noted a counselor. “Well-meaning AI tools may inadvertently create new rituals that reinforce these patterns.”
For many Thai users, moreover, the allure of chatbots is amplified by a strong cultural orientation toward “kreng jai” (consideration for others), which sometimes deters people from burdening friends or elders with repeated doubts. In this context, 24/7 AI platforms offer an anonymous, judgment-free outlet—one that doesn’t punish compulsive questioning, but may increase the risk of unhealthy cycles.
International best practices now recommend that universities, workplaces, and community centers include digital wellbeing elements in mental health promotion, especially around AI and chatbots. Guidance could include:
- Recognizing the difference between healthy information seeking and compulsive reassurance cycles.
- Setting boundaries on the frequency and duration of interactions with AI tools.
- Encouraging use of AI models as informational aids, not definitive sources of personal, medical, or existential reassurance.
- Training users—students, employees, and vulnerable groups—on how to spot unhealthy digital behavior and when to seek professional help.
- Advocating for Thai-language resources explaining the risks associated with compulsive chatbot use.
Looking forward, AI researchers are exploring how chatbots could detect signs of repetitive reassurance-seeking or distress and gently redirect the user—without crossing privacy boundaries or “diagnosing” users. However, ethical concerns abound: privacy, autonomy, and transparency must be protected (Vox).
For Thailand’s policymakers, educators, and tech developers, these insights should prompt serious planning. As AI becomes a ubiquitous part of Thai society, balancing empowerment with protection will demand cross-sectoral cooperation. Thai mental health advocates propose an integrated approach: encouraging tech providers to offer optional “wellbeing modes” and periodic reminders, while also equipping the public with the critical skills needed to safely navigate novel forms of digital support.
The take-home message for Thai readers is clear: AI chatbots can be powerful tools, but for those at risk of OCD or similar conditions, they may enable compulsive behaviors that are hard to spot and harder to break. Before turning to ChatGPT or its peers to seek certainty about fears or doubts, consider a reflective pause—or reach out to a healthcare professional. Parents, teachers, and community leaders can help by fostering open discussion about digital mental health and normalizing dialogue about new forms of technology-driven anxiety.
If you feel trapped in a pattern of repeated questioning or experience mounting anxiety from chatbot use, you are not alone. Thailand’s Department of Mental Health offers free counseling hotlines and online resources. With awareness, boundaries, and support, digital tools like ChatGPT can be harnessed for good—without becoming yet another source of compulsive distress.
Sources: