
Thai readers value human oversight as AI models disagree on high-stakes decisions

4 min read
851 words

A new study from researchers at the University of California San Diego and the University of Wisconsin–Madison, presented at the 2025 ACM CHI conference, examines how the public wants decisions made when multiple high-accuracy AI models disagree. The findings are especially relevant to Thailand as AI use grows in finance, employment, and government services.

The study centers on multiplicity: the reality that many models can achieve similar accuracy yet still produce different predictions for the same case. This raises ethical questions for organizations choosing which model to deploy, particularly for loans, jobs, or social services. The findings resonate with Thailand's push to sharpen AI risk management guidelines in finance, which signals regulators' growing attention to fairness in automated decisions.
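To make the multiplicity idea concrete, here is a minimal sketch using entirely hypothetical data and decision rules (none of it comes from the study): two simple credit "models" achieve identical accuracy on the same labeled applicants yet disagree on individual cases.

```python
# Hypothetical applicants: (income, debt_ratio, repaid_loan)
applicants = [
    (52_000, 0.20, True),
    (30_000, 0.50, False),
    (45_000, 0.40, True),
    (38_000, 0.30, True),
    (55_000, 0.25, True),
    (32_000, 0.45, False),
]

def model_a(income, debt_ratio):
    return income >= 40_000          # approves on income alone

def model_b(income, debt_ratio):
    return debt_ratio <= 0.35        # approves on debt ratio alone

def accuracy(model):
    """Fraction of applicants whose approval matches repayment."""
    return sum(model(i, d) == y for i, d, y in applicants) / len(applicants)

print("model A accuracy:", accuracy(model_a))   # both 5/6
print("model B accuracy:", accuracy(model_b))

# Equal accuracy, but the two rules reach opposite decisions
# for some individual applicants -- that is multiplicity.
disagreements = [(i, d) for i, d, _ in applicants
                 if model_a(i, d) != model_b(i, d)]
print("cases where the models disagree:", disagreements)
```

Both rules are correct on five of six applicants, yet they disagree on two of them, so the choice of which model to deploy silently decides those applicants' outcomes.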

Public preferences challenge current industry norms

The CHI study ran experiments with thousands of participants across scenarios like lending, hiring, and university admissions. It identified three key patterns in what people expect from fair AI.

First, participants rejected the idea of picking a single “best” model without explanation when several perform equally well. This challenges the common practice of selecting one model based on cross‑validation metrics and deploying it without transparent rationale.

Second, randomizing among equally capable models was largely deemed unacceptable in high-stakes decisions. People viewed random tie-breaking as shirking responsibility rather than as a fair resolution.

Third, there was strong support for remedies that increase accountability and transparency. Participants favored exploring broader model options to align with fairness goals and, crucially, involving human decision-makers to adjudicate disagreements rather than leaving outcomes to opaque algorithms.

Implications for Thailand’s financial sector

Thailand’s digital lending and scoring systems rely on automation to speed decisions, expand access, and cut costs. If different teams or vendors could choose alternative models that yield opposite results for the same application, consumers could face inconsistent treatment and trust could erode. Thailand’s draft AI risk management guidelines emphasize governance, transparency, and human oversight for high‑risk applications, aligning with the CHI study’s recommendations.

Thai society places a premium on fairness, relational accountability, and social harmony. When automated decisions feel arbitrary, people turn to networks and media to press for accountability. The study’s findings mirror this cultural pattern: transparent human review and clear explanations are valued over sealed algorithmic choices.

Practical steps for immediate implementation

Researchers propose several measures Thai institutions can adopt now. First, expand model search beyond a single “best” model to examine a wider range of options and assess whether different models affect particular groups unfairly. This requires more computing power but yields crucial fairness insights.
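The group-impact check described above can be sketched as follows. The data, group labels, and candidate rules here are all synthetic placeholders, not the researchers' method: the point is simply to compare approval rates per group across near-equivalent candidate models before picking one.

```python
# Synthetic applicants with a hypothetical group label.
applicants = [
    {"group": "A", "income": 50_000, "debt_ratio": 0.30},
    {"group": "A", "income": 36_000, "debt_ratio": 0.25},
    {"group": "B", "income": 42_000, "debt_ratio": 0.45},
    {"group": "B", "income": 55_000, "debt_ratio": 0.50},
]

# Two candidate models from an (imagined) expanded model search.
candidates = {
    "income_rule": lambda a: a["income"] >= 40_000,
    "debt_rule":   lambda a: a["debt_ratio"] <= 0.35,
}

def approval_rates(model):
    """Approval rate of `model` within each demographic group."""
    rates = {}
    for g in {a["group"] for a in applicants}:
        members = [a for a in applicants if a["group"] == g]
        rates[g] = sum(model(a) for a in members) / len(members)
    return rates

for name, model in candidates.items():
    print(name, approval_rates(model))
```

On this toy data the two rules have opposite effects on group B (approval rates of 1.0 versus 0.0), illustrating why evaluating only aggregate accuracy can hide exactly the disparities an expanded search is meant to surface.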

Second, introduce multiplicity audits to production pipelines. These audits measure how outcome variability across models could influence decisions, helping determine when human review is necessary.
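A multiplicity audit of this kind might look like the sketch below. The models and cases are hypothetical, and the flagging rule (any disagreement triggers review) is one assumed policy among many: each case is scored under several near-equivalent models, and cases whose predictions vary are routed to a human.

```python
def audit(cases, models, flag_threshold=1.0):
    """Return (case, agreement_rate) pairs for cases whose
    model agreement falls below `flag_threshold`."""
    flagged = []
    for case in cases:
        votes = [m(case) for m in models]
        agreement = max(votes.count(True), votes.count(False)) / len(votes)
        if agreement < flag_threshold:
            flagged.append((case, agreement))
    return flagged

# Three hypothetical credit rules that agree only on clear-cut cases.
models = [
    lambda c: c["income"] >= 40_000,
    lambda c: c["debt_ratio"] <= 0.35,
    lambda c: c["income"] >= 35_000 and c["debt_ratio"] <= 0.40,
]

cases = [
    {"id": 1, "income": 60_000, "debt_ratio": 0.20},  # all approve
    {"id": 2, "income": 38_000, "debt_ratio": 0.38},  # models split
    {"id": 3, "income": 25_000, "debt_ratio": 0.55},  # all reject
]

for case, agreement in audit(cases, models):
    print(f"case {case['id']}: agreement {agreement:.2f} -> human review")
```

Only the borderline case is flagged, which matches the article's point: the audit concentrates scarce human attention on exactly the decisions where model choice would otherwise be decisive.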

Third, require human adjudication for high‑stakes or borderline cases where model disagreement occurs. Clear guidelines and accountability mechanisms should govern reviewers to prevent bias.

Fourth, document decision‑making processes and disclose whether multiple models were considered and how disagreements were resolved. This transparency helps applicants understand decisions and strengthens public trust.

Consumer rights and advocacy

Thai applicants facing automated denial in loans, jobs, education, or benefits should inquire about how models are used and how disagreements are handled. Requesting human review and a clear explanation of reasoning becomes both a right and a quality assurance measure.

In line with regulatory guidance, financial institutions should be prepared to explain their AI risk management practices and provide human oversight for high-risk decisions. Civil society groups and media can press for openness about multiplicity audits and against black-box deployment in sensitive areas.

Education campaigns could help consumers understand their rights and the limits of algorithmic decision-making, empowering them to seek explanations and accountability.

Policy and regulatory development

Thailand’s regulatory environment is at a pivotal point. Final rules that emphasize transparency, governance, and human oversight will position Thai financial institutions as regional leaders in responsible AI deployment. Vendors and data scientists will need to integrate multiplicity metrics and build interfaces that assist human decision-makers in interpreting disagreements. This may require changes to development workflows but can improve fairness and accountability.

Policymakers should consider applying multiplicity protections beyond finance to employment, education, and government benefits. Consistent cross‑sector standards would clarify expectations for organizations and citizens and prevent regulatory gaps.

Research and development priorities for Thailand

Future work should pilot multiplicity audits in local institutions to determine whether transparency and human oversight reduce complaints, improve satisfaction, or enhance decision quality. Cost-effectiveness comparisons between single-model deployment and expanded multiplicity management are also essential.

Researchers should explore how Thai cultural values around community welfare and collective decision-making influence acceptable approaches to accountability. International collaboration can help Thailand adapt best practices while reflecting local expectations.

Conclusion

The CHI 2025 study highlights a clear preference for accountability and human oversight when algorithmic models disagree, with direct relevance to Thailand’s growing use of AI in finance, employment, and public services. The research shows people favor explanations and human adjudication over random or opaque outcomes.

To translate these insights into practice, Thailand should promote multiplicity audits, expand model evaluation, and strengthen human review processes. Transparent decision-making and consumer education will help build trust in automated systems while aligning with Thai cultural values and social expectations.
