
New study warns “emotionally smart” AI can make us see people as less human — and more disposable


A multi-experiment psychology study finds that interacting with autonomous agents that display socio-emotional skills can lead people to judge those machines as more humanlike — and, worryingly, to judge other humans as less human and to see mistreating them as more acceptable. The research, published in the Journal of Consumer Psychology and available via the London School of Economics repository, uses five controlled experiments to trace a chain from perceiving emotional ability in AI to lower “humanness” ratings of people, and finally to real choices that disadvantage human workers (e.g., preferring a company linked with poor working conditions or withholding a small donation to support staff) (PsyPost coverage; study PDF; journal record).

The implications for Thailand are immediate: the country’s economy depends heavily on service-sector and tourism jobs where human emotional labour — hospitality, care, customer service — is central. If socio-emotional AI is adopted widely in these roles, subtle shifts in how customers perceive human workers could translate into weaker support for employee welfare and mental-health resources at a time when many Thais rely on these sectors for their livelihoods (World Bank employment data, services 2023; WTTC Thailand factsheet).

The headline finding is straightforward but unsettling. Across five main experiments (samples ranging from about 195 to 651 participants), the authors show that exposing people to AI agents portrayed as having strong socio-emotional abilities — for example, a dancing humanoid or a virtual assistant described as empathic — raised participants’ perceptions of the AI’s humanness. Crucially, when AI’s perceived abilities were moderately humanlike (not clearly superhuman), participants’ humanness ratings for actual people shifted downward, consistent with an assimilation effect: humans were implicitly compared to the machine and judged closer to the machine’s (lower) humanness level. Those lowered humanness perceptions then predicted measurable behaviours: more tolerance for dehumanizing workplace practices, a greater willingness to choose a retailer with troubling labour practices, and a reduced likelihood of donating to support human customer-service staff (study PDF; PsyPost article).
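To make the reported chain concrete, the sketch below simulates how such a pattern could be quantified: an exposure condition predicts lower humanness ratings of other people, which in turn predict a donation choice. The data, effect sizes, and variable names are invented for illustration, and the simple regression approach (Python with numpy and statsmodels) is an assumption, not the authors’ analysis.

```python
# Illustrative sketch (not the study's code): simulate the reported chain
# exposure to socio-emotional AI -> lower "humanness" ratings of other people
# -> lower willingness to donate to support human workers.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400  # hypothetical sample size, roughly in the reported range (195-651)

# 0 = basic assistant, 1 = "empathic" assistant (labels are hypothetical)
exposure = rng.integers(0, 2, n)

# Humanness ratings of other people drop slightly under exposure (assumed effect size)
humanness = 5.0 - 0.4 * exposure + rng.normal(0, 1, n)

# Donation choice depends on perceived humanness (assumed relationship)
p_donate = 1 / (1 + np.exp(-(humanness - 5.0)))
donated = rng.binomial(1, p_donate)

# Path a: exposure -> humanness ratings
path_a = sm.OLS(humanness, sm.add_constant(exposure)).fit()

# Paths b and c': humanness and exposure -> donation (logistic regression)
X = sm.add_constant(np.column_stack([exposure, humanness]))
path_bc = sm.Logit(donated, X).fit(disp=False)

print(path_a.params)   # negative coefficient on exposure: assimilation-style drop
print(path_bc.params)  # positive coefficient on humanness: more donation
```

In the published experiments the manipulations are randomized and the mediation is tested formally; the snippet only illustrates the direction of the effects described above.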

The researchers label this phenomenon “assimilation-induced dehumanization.” Their design separates two psychological processes that could operate when people encounter humanlike machines: assimilation (pulling people toward the machine’s perceived attributes) and contrast (pushing people away when the machine is clearly unlike humans). Study 2 manipulated whether a robot’s capabilities were described as moderate versus extreme (e.g., superhuman sensing like X-ray vision). When capabilities were extreme, participants showed a contrast effect instead — human humanness was reinforced rather than diminished. That boundary condition suggests the risk is highest when AI sits in an ambiguous zone: humanlike enough to invite comparison, but still clearly less human in other respects (study PDF).

A pivotal question was whether any advanced AI would trigger the effect or only emotionally skilled systems. Study 3 compared an “EmpathicMind” (emotional skills) to an “InsightMind” (advanced cognitive skill) and a basic control. Only EmpathicMind produced the dehumanization effect, implying that perceived capacity to feel — not merely to think or compute — is the psychological trigger. In plain terms: emotional intelligence in machines can create the illusion of shared feeling that backfires on real people (study PDF; PsyPost coverage).

Researchers tested behavioural consequences to show the effect is not just an abstract rating shift. In one experiment, participants exposed to emotionally capable AI were more likely to choose an Amazon gift card after reading a news article about alleged poor working conditions at Amazon, suggesting reduced empathic concern for human workers. In another, participants facing the choice to donate a small bonus to support the mental health of human customer-service agents were less likely to give if they had just read about a socio-emotional virtual assistant. These real-choice measures strengthen the claim that the effect has practical implications for worker welfare and consumer ethics (PsyPost article; study PDF).

The lead author, an assistant professor at the London School of Economics, framed the concern succinctly: while humanlike AI can make interactions smoother and uptake easier, it may “quietly reshape our understanding of what it means to be human,” with downstream costs for empathy and worker protections (PsyPost interview quote). The paper’s open-access version and journal abstract provide the experimental details and sample sizes for readers wanting to inspect the methods (LSE eprint; journal record).

For Thailand, the study’s implications intersect with policy and culture in several ways. Nearly half of Thailand’s workforce is employed in services — a large share concentrated in tourism, hospitality, retail and health services where emotional labour is intrinsic to job performance and dignity (World Bank employment data; Statista on sector jobs). Tourism alone accounts for a substantial share of GDP and livelihoods, and Thai government strategy actively fosters digital transformation and AI adoption as part of a broader economic plan (WTTC Thailand factsheet; Thailand National AI Strategy 2022-2027). That creates a real risk: deploying emotionally attuned chatbots and service robots without safeguards could erode social respect for frontline workers — an outcome at odds with Thailand’s family-oriented social norms and Buddhist values that emphasise compassion and the moral duty to care for others.

Historically, Thai society places a premium on personal kindness, respect for elders and the dignity of service professions; street-level hospitality and the “service with a smile” ethic are national strengths often celebrated in tourism marketing. The idea that consumers might start regarding hotel staff, tour guides, or call-centre agents as less deserving of humane treatment because a machine appears emotionally attentive would be culturally jarring and could undermine social cohesion in workplaces that already face stress and low margins. Policymakers should therefore weigh technological efficiency gains against cultural and social costs when incentivising AI use in front-facing roles (Thailand AI strategy; WTTC Thailand factsheet).

The study also points to practical design and policy levers. The assimilation effect weakened or reversed when the machine was clearly nonhuman (superhuman capabilities) or when the agent exhibited cognitive but not emotional skills. This suggests two immediate interventions: design choices that avoid unnecessary anthropomorphism of emotional states, and clear communication to consumers about AI limitations and roles. Designers can opt for transparent interfaces that emphasise functionality rather than simulated feeling; regulators can require labelling standards so users know whether they are interacting with a machine and what it can — and cannot — feel (study PDF).
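As one concrete illustration of the labelling idea, the sketch below shows a minimal wrapper a customer-facing chatbot could apply to every reply: a plain-language disclosure that the agent is software, plus a crude filter that softens first-person emotional claims. The function names, wording, and regular expression are hypothetical, not drawn from the study or from any existing standard.

```python
# Hypothetical sketch of a "transparency by default" reply wrapper for a
# customer-facing chatbot; names and wording are illustrative only.
import re

DISCLOSURE = "[Automated assistant] You are chatting with software, not a person."

# Naive pattern for simulated-feeling phrases; a real deployment would need
# locale-specific rules (e.g., Thai-language equivalents) and human review.
EMOTION_CLAIMS = re.compile(
    r"\bI (really )?(feel|care|understand how you feel)\b", re.IGNORECASE
)

def wrap_reply(bot_text: str) -> str:
    """Prefix a machine disclosure and soften first-person emotional claims."""
    softened = EMOTION_CLAIMS.sub("I can look into that", bot_text)
    return f"{DISCLOSURE}\n{softened}"

print(wrap_reply("I really understand how you feel. Let me check your booking."))
```

The design stance, rather than the specific code, is the point: state plainly that the agent is a machine and avoid presenting simulated feeling as the real thing.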

Policymakers in Thailand’s Ministry of Digital Economy and Society and labour authorities could incorporate these findings into AI governance and workplace guidelines. Thailand’s National AI Strategy stresses ethical AI development and capacity-building; incorporating explicit safeguards against dehumanization of workers would align with those aims (Thailand National AI Strategy 2022-2027). Practical steps include requiring human-centric design standards for customer-facing AI, mandating impact assessments for deployments that could affect worker dignity, and funding public education campaigns that help consumers distinguish machine empathy from human empathy.

Employers and hospitality operators should also act. Training for managers to preserve human dignity and mental-health resources must remain a priority even as automation grows. Corporate procurement policies can favour AI vendors that commit to non-anthropomorphic interfaces and to complementary human-AI workflows that enhance rather than replace human emotional labour. Trade associations and chambers of commerce can produce sector-specific guidelines to prevent a race to replace relational labour with simulated empathy that might erode consumer support for human staff.

The study’s authors responsibly note limitations and future research directions. The experiments relied on online samples and short-term exposures; it remains to be seen how repeated, long-term interactions with socio-emotional AI (for example, daily hotel chatbots or long-running care robots) shape perceptions over months or years. The paper also raises the unanswered question of self-directed effects: will people internalise machine-like standards and accept less humane treatment for themselves, or will they reaffirm their own humanness in response? Follow-up longitudinal field studies in real workplaces would help answer these questions and inform context-specific policy responses (study PDF; PsyPost summary).

For Thai readers wondering what to do now, here are practical steps tailored to local institutions and citizens:

- Regulators should add dehumanization risk to AI ethics checklists and require transparency labels for emotionally framed AI.
- Hospitality and retail employers should keep human wellbeing funding and mental-health support in procurement and budgeting decisions.
- Universities and vocational trainers should include modules on human-AI interaction ethics in hospitality and healthcare curricula.
- Consumers can demand clarity about whether they are interacting with a person or an AI and support businesses that invest in humane labour practices.

The study is a timely reminder that technical progress does not automatically equate to social progress. As Thailand navigates an AI-driven transition in services and tourism, combining the kingdom’s cultural strengths of compassion and respect with evidence-based governance can ensure technology supplements, rather than erodes, human dignity. For policymakers, technology leaders and consumers alike, the message is clear: design and deploy emotionally capable AI with care — because how machines are presented can quietly change how we see and value one another (study PDF; PsyPost report; Thailand AI strategy).
