A new study from researchers at the University of California San Diego and the University of Wisconsin–Madison, presented at the 2025 ACM CHI conference, examines how the public wants decisions made when multiple high-accuracy AI models disagree. The findings are especially relevant to Thailand as AI use grows in finance, employment, and government services.
The study centers on multiplicity—the reality that many models can achieve similar accuracy but still produce different predictions for the same case. This raises ethical questions for organizations choosing which model to deploy, particularly for loans, jobs, or social services. The issue resonates with Thailand’s push to sharpen AI risk management guidelines in finance, signaling regulators’ attention to fairness in automated decisions.
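Multiplicity is easy to illustrate. In the hypothetical sketch below (toy data and threshold rules of our own invention, not the study's models), two lending rules achieve identical accuracy on a holdout set yet reach opposite decisions for some applicants—precisely the situation the study asked the public about:

```python
# Toy illustration of multiplicity: two decision rules with identical
# overall accuracy that still disagree on individual applicants.
# All data and rules here are hypothetical, for illustration only.

# Each record: (income, true_label) where True means the loan was repaid.
holdout = [(25, False), (35, True), (45, False), (55, True)]

# Two equally simple approve/deny rules with different income cutoffs.
rule_x = lambda income: income >= 30
rule_y = lambda income: income >= 50

def accuracy(rule):
    """Fraction of holdout cases where the rule matches the true label."""
    return sum(rule(inc) == label for inc, label in holdout) / len(holdout)

print(accuracy(rule_x), accuracy(rule_y))  # both 0.75: equally "good"
disagree = [inc for inc, _ in holdout if rule_x(inc) != rule_y(inc)]
print("Rules disagree for incomes:", disagree)  # [35, 45]
```

Which rule a lender deploys decides the fate of the disagreed-upon applicants, even though no accuracy metric distinguishes the rules—this is the choice the study's participants wanted made transparently.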
Public preferences challenge current industry norms
The CHI study ran experiments with thousands of participants across scenarios like lending, hiring, and university admissions. It identified three key patterns in what people expect from fair AI.
First, participants rejected the idea of picking a single “best” model without explanation when several perform equally well. This challenges the common practice of selecting one model based on cross‑validation metrics and deploying it without transparent rationale.
Second, randomizing among equally capable models was largely deemed unacceptable in high-stakes decisions. People viewed random tie-breaking as shirking responsibility rather than a fair resolution.
Third, there was strong support for remedies that increase accountability and transparency. Participants favored exploring broader model options to align with fairness goals and, crucially, involving human decision-makers to adjudicate disagreements rather than leaving outcomes to opaque algorithms.
Implications for Thailand’s financial sector
Thailand’s digital lending and scoring systems rely on automation to speed decisions, expand access, and cut costs. If different teams or vendors could choose alternative models that yield opposite results for the same application, consumers could face inconsistent treatment and trust could erode. Thailand’s draft AI risk management guidelines emphasize governance, transparency, and human oversight for high‑risk applications, aligning with the CHI study’s recommendations.
Thai society places a premium on fairness, relational accountability, and social harmony. When automated decisions feel arbitrary, people turn to networks and media to press for accountability. The study’s findings mirror this cultural pattern: transparent human review and clear explanations are valued over sealed algorithmic choices.
Practical steps for immediate implementation
Researchers propose several measures Thai institutions can adopt now. First, expand model search beyond a single “best” model to examine a wider range of options and assess whether different models affect particular groups unfairly. This requires more computing power but yields crucial fairness insights.
Second, introduce multiplicity audits to production pipelines. These audits measure how outcome variability across models could influence decisions, helping determine when human review is necessary.
Third, require human adjudication for high‑stakes or borderline cases where model disagreement occurs. Clear guidelines and accountability mechanisms should govern reviewers to prevent bias.
Fourth, document decision‑making processes and disclose whether multiple models were considered and how disagreements were resolved. This transparency helps applicants understand decisions and strengthens public trust.
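The audit-and-adjudicate workflow in the measures above can be sketched in a few lines. This is a minimal illustration under our own assumptions (invented applicant data and three hypothetical scoring rules standing in for near-equally-accurate production models), not a prescribed implementation:

```python
# Minimal sketch of a multiplicity audit: run every candidate model on
# each case, measure how often they disagree, and flag disagreements
# for human adjudication. Data and rules are hypothetical.

# Hypothetical loan applicants: (income, debt_ratio) pairs.
applicants = [(52, 0.30), (38, 0.55), (61, 0.20), (45, 0.48), (29, 0.35)]

# Three near-equally-accurate approve/deny rules (stand-ins for models).
models = {
    "rule_a": lambda inc, dr: inc >= 40 and dr <= 0.50,
    "rule_b": lambda inc, dr: inc - 60 * dr >= 15,
    "rule_c": lambda inc, dr: inc >= 35 and dr <= 0.45,
}

def multiplicity_audit(applicants, models):
    """Return the ambiguity rate and the cases needing human review."""
    flagged = []
    for i, (inc, dr) in enumerate(applicants):
        decisions = {name: rule(inc, dr) for name, rule in models.items()}
        if len(set(decisions.values())) > 1:  # models disagree on this case
            flagged.append((i, decisions))
    ambiguity_rate = len(flagged) / len(applicants)
    return ambiguity_rate, flagged

rate, flagged = multiplicity_audit(applicants, models)
print(f"Ambiguity rate: {rate:.0%}")  # share of cases with disagreement
for idx, decisions in flagged:
    print(f"Applicant {idx} -> route to human adjudication: {decisions}")
```

The ambiguity rate gives auditors a single number to track over time, while the flagged cases form the queue for the human reviewers that the third measure calls for; logging both also produces the documentation trail the fourth measure requires.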
Consumer rights and advocacy
Thai applicants facing an automated denial in loans, jobs, education, or benefits should inquire how models are used and how disagreements are handled. Requesting human review and a clear explanation of the reasoning is both a right and a quality assurance measure.
In line with regulatory guidance, financial institutions should be prepared to explain their AI risk management practices and provide human oversight for high‑risk decisions. Civil society groups and media can press for openness about multiplicity audits and against black‑box deployment in sensitive areas.
Education campaigns could help consumers understand their rights and the limits of algorithmic decision-making, empowering them to seek explanations and accountability.
Policy and regulatory development
Thailand’s regulatory environment is at a pivotal point. Final rules that emphasize transparency, governance, and human oversight will position Thai financial institutions as regional leaders in responsible AI deployment. Vendors and data scientists will need to integrate multiplicity metrics and build interfaces that assist human decision-makers in interpreting disagreements. This may require changes to development workflows but can improve fairness and accountability.
Policymakers should consider applying multiplicity protections beyond finance to employment, education, and government benefits. Consistent cross‑sector standards would clarify expectations for organizations and citizens and prevent regulatory gaps.
Research and development priorities for Thailand
Future work should pilot multiplicity audits in local institutions to determine whether transparency and human oversight reduce complaints, improve satisfaction, or enhance decision quality. Cost-effectiveness comparisons between single‑model deployment and expanded multiplicity management are also essential.
Researchers should explore how Thai cultural values around community welfare and collective decision-making influence acceptable approaches to accountability. International collaboration can help Thailand adapt best practices while reflecting local expectations.
Conclusion
The CHI 2025 study highlights a clear preference for accountability and human oversight when algorithmic models disagree, with direct relevance to Thailand’s growing use of AI in finance, employment, and public services. The research shows people favor explanations and human adjudication over random or opaque outcomes.
To translate these insights into practice, Thailand should promote multiplicity audits, expand model evaluation, and strengthen human review processes. Transparent decision-making and consumer education will help build trust in automated systems while aligning with Thai cultural values and social expectations.