In a surprising twist to the promise of artificial intelligence in medicine, a recent study published in a leading medical journal found that doctors who used an AI tool to flag precancerous growths during colonoscopies showed a weakening of their own detection abilities once the tool was withdrawn. After three months of real-time AI assistance, the rate at which they spotted the growths unaided dropped from about 28% of procedures to roughly 22%. The finding, though based on an observational study, raises the question of whether AI can improve care in the short term while eroding essential clinical skills in the long term.
The study unfolded across four endoscopy centers in Poland. During the intervention phase, physicians performed colonoscopies with an AI system that marked suspicious lesions with colored boxes in real time, providing a visual cue to guide their attention. Before the AI was introduced, detection hovered around the 28% mark. When the AI was taken away, performance slipped below that baseline to about 22%. The researchers did not prove causation; the rise in total procedures during the AI period might, for instance, have reduced the thoroughness of each exam. Yet the pattern aligns with a growing concern about “deskilling” — the idea that reliance on automation can erode a clinician’s capacity to perform key skills unaided.
“This is a two-way process,” said Dr. Omer Ahmad, a gastroenterologist who published an editorial alongside the study. “We give AI inputs that affect its output, but it also seems to affect our behavior as well.” The implication is not that AI is inherently bad, but that human operators and machines influence each other in ways that can dull core competencies if not managed carefully. The researchers emphasized that the question of causality remains unresolved, but the signal is strong enough to warrant attention from medical educators and health systems.
Experts outside the study acknowledge the complexity. Dr. Robert Wachter, a prominent voice on AI in medicine, pointed to a familiar pattern from earlier technological shifts. “There are plenty of harmless examples of new technology making old skills obsolete,” he noted, drawing a parallel to the stethoscope’s role in history. The key difference, he argued, is that AI-based tools continuously adapt and require ongoing oversight. Algorithms are trained on a snapshot in time, and as conditions change, their outputs may drift if not monitored and maintained. For physicians, that means a continuing responsibility to recognize when AI is right or wrong and to stay proficient in the fundamentals that underlie their craft.
If there is a warning here, it lies in the transition period. In the study, the doctors—who averaged around 27 years in practice—were highly experienced. The possibility that less experienced clinicians, or those still in training, could experience even more pronounced deskilling raises questions about how to structure education and ongoing competency assessments as AI becomes more deeply embedded in clinical workflows.
The authors and many experts also highlighted potential mitigations. One practical approach is to treat AI as a powerful assistant rather than a replacement for human discernment. As Dr. Chris Longhurst of UC San Diego Health explained, simulation-based training can help clinicians rehearse procedures without AI, preserving muscle memory and decision-making skills. Other centers have started similar programs, and some medical schools are considering restricting AI access during early training years to ensure students first build hands-on proficiency. The overarching message is clear: any AI integration must be paired with deliberate strategies to maintain, and even grow, clinician competency.
For Thai readers, the study’s implications are both timely and consequential. Thailand has been moving forward with digital health initiatives and AI pilots across major hospitals, particularly in Bangkok and regional medical centers. Endoscopy is among the procedures where AI-assisted detection tools are being explored to improve early cancer detection, a priority for Thai public health given the country’s substantial burden of gastrointestinal cancers. The core takeaway for Thai health systems is straightforward: AI can amplify a clinician’s capabilities when used thoughtfully, but it can also erode essential skills if clinicians become overly dependent or if training lags behind technology adoption.
The Thai context adds nuance to the deskilling conversation. Physicians in urban centers often face high patient volumes and resource constraints, while rural facilities contend with limited access to specialists. AI promises to streamline screening workflows and reduce missed lesions in busy clinics, but the deskilling risk underscores the need for balanced deployment. It also highlights the cultural dimension of medical care in Thailand. Thai patients place strong trust in physicians, and decisions are often framed within family discussions and respect for medical authorities. As AI tools become more visible in consultation rooms, clinicians will need to communicate transparently with patients about how these tools influence care and ensure that technology enhances, rather than replaces, human judgment.
From an educational perspective, the study reinforces a broader global trend: the era of AI in medicine will demand new kinds of training. In Thailand, medical educators and hospital leadership can draw from these findings to design curricula that blend AI literacy with rigorous hands-on practice. Simulation labs, which are increasingly present in Thai teaching hospitals, can be expanded to include AI-augmented scenarios in which trainees switch off AI assistance and demonstrate competence in lesion recognition and endoscopic technique. Such programs help cultivate the cognitive stamina and situational awareness that automated systems cannot supply.
There are several concrete steps Thai health authorities and institutions could consider. First, implement structured retraining cycles for clinicians who use AI in high-stakes tasks like endoscopy. Short, regular refresher sessions that emphasize manual review and independent decision-making can help maintain core skills. Second, institutionalize hybrid workflows in which AI serves as a first-pass screen but requires explicit human adjudication before any clinical decision is made. Third, invest in ongoing data-driven quality assurance that tracks both AI-assisted and non-AI performance, with a clear protocol for escalation if AI outputs diverge from clinician judgment. Fourth, expand endoscopy simulation programs and incorporate AI-agnostic drills to ensure that physicians can perform with precision even when AI support is temporarily unavailable. Fifth, embed ethics and communication training so clinicians can navigate patient conversations about AI involvement, preserving trust and the human-centered dimensions of Thai medical care.
The Lancet study also invites a broader reflection on how technology shapes medical culture in Thailand. In a country where religious and cultural practices inform attitudes toward health, the integration of AI into clinical care must be framed within a narrative of care, compassion, and social equity. Hospitals could use community outreach at temples and schools to explain how AI tools assist clinicians without eroding the personal attention that patients expect from their doctors. Public messaging should emphasize that technology augments expertise rather than replaces it, aligning with Buddhist values that emphasize mindful attention to harm, wisdom in decision-making, and the humility to seek help when needed.
Looking ahead, researchers cautioned that more evidence is needed to understand the long-term impact of AI in different specialties. The Polish study adds to a growing chorus that AI in medicine is not a panacea; it is a tool whose value depends on the surrounding ecosystem of training, supervision, human oversight, and ongoing skill development. For Thai healthcare, the implication is not to abandon AI, but to govern its use through thoughtful policy, resilient education strategies, and robust clinical governance that keeps patient safety at the center.
In practical terms, Thailand’s health system should prioritize the development of a clear framework for AI-assisted procedures that explicitly addresses skill maintenance. This includes standardized competency benchmarks, regular performance audits, and mandates for AI-free drills in essential procedures. It also means fostering a culture that values continuous learning and humility before technology—traits that Thai medical professionals have long demonstrated, especially when guiding families who seek reassurance in times of illness. If done correctly, AI can reduce missed lesions, accelerate diagnosis, and support clinicians in delivering timely care, while rigorous training and oversight ensure that doctors retain the skills that keep patients safe when AI is not available.
The key takeaway for Thai audiences is both cautious and hopeful. AI tools are not inherently dangerous; unfettered reliance is. The opportunity lies in integrating AI in ways that preserve and even enhance human expertise. That means intelligent design of training programs, deliberate practice with and without AI, and a patient-centered approach that respects cultural values and the primacy of compassionate care. If Thai institutions adopt a balanced path, the benefits of AI—earlier detection, faster triage, and more efficient workflows—can be realized without sacrificing the fundamental skills that empower doctors to protect lives in any circumstance.
As the medical community grapples with this emerging evidence, one thing remains clear: the future of medicine in Thailand will be shaped by how well clinicians, educators, and policymakers align technology with enduring human judgment. The challenge is to ensure AI acts as an amplifier of skill, not a substitute. With thoughtful training, transparent communication, and a cultural commitment to continuous learning, Thai healthcare can harness AI’s potential while safeguarding the core competencies that have always defined good medical care.