Machine Learning Fairness: Public Demands Human Oversight When AI Models Disagree
Recent research from the University of California San Diego and the University of Wisconsin–Madison offers critical insight into public expectations for algorithmic decision-making in high-stakes contexts. The study, presented at the 2025 ACM CHI conference, examined how ordinary people react when multiple machine learning models of comparable accuracy reach different conclusions about the same application. The findings challenge both current industry practices and academic assumptions about fair automated decision-making, with direct implications for Thailand’s rapidly expanding use of AI systems in financial services, employment, and government programs.
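The phenomenon at the heart of the study, sometimes called predictive multiplicity, is easy to reproduce. The sketch below (illustrative only, not the study's code or data) trains two different model types on the same synthetic classification task; both reach similar overall accuracy, yet they hand down conflicting decisions for a nontrivial fraction of individual cases.

```python
# Illustrative sketch of predictive multiplicity: two models with
# comparable overall accuracy can still disagree on individual cases.
# Synthetic data stands in for real loan/hiring applications.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Generate a toy binary-decision dataset (e.g., approve/deny).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two reasonable, high-performing models trained on identical data.
model_a = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
model_b = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

acc_a = model_a.score(X_te, y_te)
acc_b = model_b.score(X_te, y_te)

# Fraction of held-out "applicants" who receive conflicting decisions.
disagree = (model_a.predict(X_te) != model_b.predict(X_te)).mean()

print(f"accuracy A: {acc_a:.3f}, accuracy B: {acc_b:.3f}")
print(f"fraction with conflicting decisions: {disagree:.3f}")
```

Aggregate accuracy metrics hide this disagreement entirely, which is why the study's participants focused instead on what should happen to the specific individuals whose outcomes flip depending on which model is deployed.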