
Study Finds Short AI Use Can Reduce Doctors' Polyp Detection in Colonoscopy


A new multicentre study found that doctors became worse at spotting polyps after a short period of AI-assisted practice. The drop raises concern about how quickly clinicians may come to depend on AI-assisted tools (Lancet study) (PubMed abstract).

The study analysed colonoscopies at four Polish centres before and after AI introduction. The findings suggest real-world skill changes when clinicians rely on AI prompts (Lancet Gastroenterology & Hepatology study).

The study matters to Thai readers because colorectal screening saves lives. Thailand faces rising colorectal cancer rates that demand effective detection and trained doctors (Current Colorectal Cancer in Thailand).

Researchers compared adenoma detection rates before and after local AI rollout. The adenoma detection rate (ADR) fell from 28.4% to 22.4% after AI exposure, an absolute drop of 6.0 percentage points (PubMed abstract).
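As a quick check on the reported figures, the absolute difference is simply the post-exposure rate minus the pre-exposure rate, expressed in percentage points:

```python
# Reported adenoma detection rates (ADR) from the study
adr_before = 0.284  # 28.4% before AI introduction
adr_after = 0.224   # 22.4% after AI exposure

# Absolute difference, expressed in percentage points
abs_diff = (adr_after - adr_before) * 100
print(f"Absolute difference: {abs_diff:.1f} percentage points")  # -6.0
```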

The study used real clinical records from routine colonoscopies. The data covered 1,443 non-AI-assisted colonoscopies performed before and after AI exposure in the ACCEPT trial (PubMed abstract).

Investigators used multivariable logistic regression to adjust for patient factors. The analysis showed that AI exposure was independently associated with lower odds of detection (PubMed abstract).

The lead clinician described surprise at the finding. He said clinicians seemed to wait for a green box to highlight polyps (NPR report).

An external researcher urged caution in interpreting the results. He warned that three months might be too short to conclude permanent skill loss (NPR report).

The study design was observational and retrospective. Researchers compared three months before and three months after AI implementation in each centre (PubMed abstract).

The study excluded patients with inflammatory bowel disease or prior colorectal surgery. The sample reflected routine diagnostic colonoscopies in participating clinics (PubMed abstract).

The AI tool used real-time video analysis during colonoscopy. The system highlighted suspect regions with a green box to alert endoscopists.

When the AI was running, detection rates improved in previous reports. Prior studies showed AI can increase polyp detection when active during procedures.

The new finding is different: the concern focuses on clinician performance when the AI is absent.

The authors called this phenomenon “deskilling.” They worried that frequent AI assistance might reduce vigilance or visual search skills.

The study noted that age and patient sex also affected detection. Older patients and male patients had higher adenoma detection odds in the analysis (PubMed abstract).

The authors acknowledged limitations in the paper. They said many confounders exist in a real-world rollout.

An external expert pointed to possible statistical variation. He noted that patient mix and unknown ground truth might explain the decline (NPR report).

The lead investigator emphasised he supports AI use. He said AI helps when active and that research should examine behavioural effects.

The study received funding from international agencies. The European Commission and Japan Society for the Promotion of Science supported the work (PubMed abstract).

International reports have shown similar concerns in other imaging fields. Screening mammogram studies found nonexperts performed worse when they expected AI help (NPR report).

AI adoption in medicine is growing fast. Clinicians now use AI in eye scans, breast imaging, and endoscopy.

Thailand is expanding technology use in healthcare. Public and private hospitals explore AI for imaging and workflow improvements.

Colorectal cancer ranks among Thailand’s top cancers. It accounts for about 10% of new cancer cases in Thailand in recent national studies (Current Colorectal Cancer in Thailand).

Thailand faces screening gaps. A 2024 survey found many Thais have never had a colorectal screening test, with cost commonly cited as a barrier (ecancer report).

Low screening rates make each colonoscopy crucial for detection. Reduced clinician vigilance could worsen late-stage diagnoses.

Thailand has limited colonoscopy capacity in some provinces. Regional hospitals carry the screening burden for wider populations (research on screening capacity).

Thai cultural values shape healthcare decisions. Families often prefer doctors with experience and trust authority figures in medical settings.

Thai patients may accept AI tools if clinicians endorse them. Doctor recommendation remains a strong motivator for screening in Thai society.

Any reduction in clinician skill could erode public trust. Patients might worry if headlines suggest doctors depend on machines.

Hospitals in Thailand may face choices about AI deployment. Administrators must weigh efficiency gains against potential behavioural risks.

Medical training in Thailand rarely included AI until recently. Many senior clinicians learned before AI tools emerged.

Training gaps could magnify reliance on AI. Clinicians without formal AI education may rely on visual prompts more than algorithm-literate peers.

Policy makers should plan AI integration. They should combine technology rollout with training and monitoring.

Regulators must require post-deployment evaluation. Ongoing audits can check for unintended changes in clinician performance.

Hospitals should track detection metrics after AI adoption. They should measure outcomes both with and without AI presence.

Teaching hospitals should use simulation training. Simulators can let clinicians practise without AI prompts.

Continuing medical education must include AI literacy. Courses should explain algorithm strengths and failure modes.

Clinical guidelines should define safe AI use. Guidelines can state when to trust AI and when to rely on human judgement.

Health systems should test “dependence risk” before wide rollout. Pilots can reveal short-term behaviour changes.

Manufacturers should design human-centered interfaces. Alerts should support clinician attention rather than replace it.

Researchers should test long-term effects of AI exposure. Studies should follow clinicians for months and years.

Future trials should randomise clinicians to AI training or control. Randomised designs would reduce confounding in behavioural studies.

Thai researchers can replicate this study locally. Local evidence can guide national policy and clinical practice.

Thai hospitals can partner with universities for implementation studies. Collaborative research builds capacity and trust.

Ethics committees must evaluate AI studies carefully. They must consider patient safety and clinician behaviour.

Patients must receive clear information about AI's role in their care. Informed consent should mention AI involvement when relevant.

Clinicians should maintain manual skills. They should practise detection without AI periodically.

Senior doctors should mentor younger staff on non-AI skills. Experienced clinicians can pass on pattern recognition and inspection habits.

Professional societies in Thailand should issue guidance. Societies can set standards for AI use in endoscopy.

The Ministry of Public Health can issue circulars on AI adoption. Official guidance can harmonise practice across provinces.

Hospitals should monitor patient outcomes after AI adoption. They should report changes in detection and interval cancers.

AI vendors should support post-market surveillance. They should fund independent studies on behavioural effects.

Investors and funders must consider implementation risks. Funding should include evaluation and clinician training budgets.

Media coverage should explain nuance in findings. Reporters should avoid alarmist messages about doctors losing skills.

Clinics should keep some procedures AI-free for skills maintenance. Rotating non-AI shifts can help maintain human vigilance.

Tele-mentoring could support clinicians using AI. Remote experts can advise when AI suggests ambiguous findings.

Community health workers can promote screening uptake. They can explain screening benefits and available options.

Thai public awareness campaigns should stress shared responsibility. Campaigns should ask patients to follow preparation and follow-up advice.

Patients can ask their doctors about AI use. They can request explanations about how AI aids their procedure.

Clinicians should disclose AI involvement during informed consent. Transparency increases patient trust and autonomy.

Professional education should teach cognitive biases linked to automation. Courses should cover “automation bias” and how to avoid it.

Research should identify which clinicians face greatest deskilling risk. Novices and over-reliant users may differ in vulnerability.

Health systems should prioritise patient safety during AI rollout. Safety monitoring must guide expansion decisions.

AI can still reduce missed lesions when used correctly. The technology has clear potential to improve detection rates.

Balanced adoption can capture benefits and limit harms. Thoughtful integration and safeguards can protect patients.

The Lancet study points to a clear signal that needs further study. It should prompt broader evaluation and policy action (PubMed abstract).

Thai health leaders must act before widespread, unchecked AI use. They must design policies that suit Thailand’s public health needs.

Hospitals should convene multidisciplinary AI committees. Committees should include clinicians, ethicists, and engineers.

Medical schools should add AI modules to curricula. Early exposure helps future doctors use AI wisely.

Professional licensing boards can require AI competence. Certification can include safe AI use standards.

Regulators should require performance monitoring for AI tools. They should mandate post-market data on clinical and behavioural outcomes.

Insurers and payers can incentivise balanced approaches. They can tie reimbursement to quality metrics rather than AI use alone.

Families in Thailand often share medical decisions. Clear communication about AI can support family-centred care.

Buddhist values in Thailand encourage mindfulness and care. Clinicians can incorporate mindful attention in endoscopy practice.

Respect for authority means doctors should lead on safe AI use. Senior clinicians can model best practices for juniors.

The study offers practical lessons for Thai hospitals. It shows the need for training, monitoring, and cautious rollout.

Hospitals can begin small, monitor results, and scale responsibly. This approach aligns with patient safety principles.

Clinicians can adopt simple habits to avoid automation bias. They can perform deliberate visual scans before checking AI prompts.

Endoscopy units can use quality dashboards to track the adenoma detection rate (ADR). Regular feedback drives improvement in detection rates.
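A dashboard of this kind reduces to a simple calculation: adenomas found divided by procedures performed, broken down by endoscopist and by whether AI assistance was used. The sketch below uses made-up records purely to illustrate the bookkeeping:

```python
# Hypothetical example: computing adenoma detection rate (ADR) from
# simple procedure records, split by endoscopist and AI use, as a
# unit-level quality dashboard might. The records are made-up data.
from collections import defaultdict

# Each record: (endoscopist, ai_used, adenoma_found)
procedures = [
    ("dr_a", True, True), ("dr_a", True, False), ("dr_a", False, True),
    ("dr_b", False, False), ("dr_b", True, True), ("dr_b", False, False),
]

counts = defaultdict(lambda: [0, 0])  # key -> [adenomas found, procedures done]
for endoscopist, ai_used, adenoma_found in procedures:
    key = (endoscopist, ai_used)
    counts[key][0] += adenoma_found
    counts[key][1] += 1

for (endoscopist, ai_used), (adenomas, total) in sorted(counts.items()):
    adr = adenomas / total
    print(f"{endoscopist} | AI {'on' if ai_used else 'off'} | ADR {adr:.0%}")
```

Comparing the "AI on" and "AI off" rows for the same endoscopist over time is exactly the kind of monitoring the study's findings argue for.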

The national dialogue about AI in medicine must be inclusive. It should include patients, clinicians, regulators, and vendors.

Thailand can learn from international early adopters. Global experience helps avoid predictable mistakes.

The Lancet study does not ban AI use. It asks for careful implementation and ongoing research (PubMed abstract).

Policymakers should treat AI as a tool, not a replacement. The human clinician must remain central to patient care.

Clinics should create fallback protocols when AI fails. Staff should know steps to follow when AI gives no suggestion.

AI interfaces should encourage clinicians to confirm before acting. Design can nudge careful human verification.

Hospitals should measure patient satisfaction and outcomes together. Good AI use must improve both technical results and patient experience.

Research funders should prioritise behavioural studies. They should allocate funds for implementation science in AI.

Thai journals and conferences should highlight AI safety research. Local dissemination builds awareness among clinicians.

Clinicians should keep clinical reasoning skills sharp. Relying on pattern recognition alone can be risky.

The public must remain engaged in debates about AI in healthcare. Public input strengthens policy legitimacy.

Thailand has an opportunity to build a safe AI roadmap. It can balance innovation with strong patient protections.

The Lancet study signals caution and opportunity together. The finding pushes global medicine to design safer AI integration (PubMed abstract).

Action steps for Thai stakeholders are clear and practical. Training, monitoring, transparent consent, and design changes can reduce dependence risks.

Thailand can lead regional examples of safe AI adoption. Thoughtful policy and local research can set high standards.


Medical Disclaimer: This article is for informational purposes only and should not be considered medical advice. Always consult with qualified healthcare professionals before making decisions about your health.