New Study Finds GPT-4o Shows Humanlike Cognitive Dissonance, Sparking Debate on AI Psychology

A groundbreaking study by Harvard University has found that GPT-4o, OpenAI’s latest large language model, exhibits behaviors akin to human cognitive dissonance—a psychological phenomenon previously thought to be exclusively human. The findings, reported in the prestigious Proceedings of the National Academy of Sciences on May 28, 2025, raise fresh questions about how advanced AI systems process information and make decisions, carrying significant implications for Thailand’s growing embrace of AI-driven technology in education, health, and society at large (TechXplore).

Cognitive dissonance refers to the mental discomfort people experience when their beliefs or attitudes conflict with their actions. Decades of psychological research show that humans will often rationalise their behaviour—changing their attitudes to align with what they have done, especially when they feel they chose their actions freely. The novelty of the Harvard study lies in showing that GPT-4o, though a statistical prediction engine without consciousness, appears to echo this key human trait.

The research, led by a team including a Harvard psychology professor and an executive at a behavioral science firm, examined how GPT-4o's "opinions" changed after it composed essays for or against Russian leader Vladimir Putin. After writing such an essay, the model's responses shifted notably, showing a preference for consistency between its stated attitudes and its previous outputs. Even more strikingly, when the AI was subtly led to believe it had freely chosen which essay to write, its subsequent stance toward Putin shifted further in the direction of the essay, mirroring classic cognitive dissonance responses observed in humans.
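
For readers curious how such an attitude-shift measurement could be probed in practice, the sketch below outlines one simplified approach: ask the model to rate its view of a topic, have it write a one-sided essay under a "free choice" framing, then ask for the rating again within the same conversation. This is not the Harvard team's actual protocol; the prompts, the "gpt-4o" model name, and the ask helper are illustrative assumptions built on the publicly documented OpenAI Python client and an API key in the environment.

```python
# Simplified attitude-shift probe (illustrative only, not the study's protocol).
# Assumes the official OpenAI Python client (`pip install openai`) and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"
history = []       # running conversation so the essay stays in the model's context

def ask(prompt: str) -> str:
    """Append a user turn, request a reply, and keep both in the history."""
    history.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model=MODEL, messages=history, temperature=0)
    reply = resp.choices[0].message.content.strip()
    history.append({"role": "assistant", "content": reply})
    return reply

topic = "the leadership of Vladimir Putin"
rating_prompt = (
    "On a scale of 1 (very unfavourable) to 10 (very favourable), how would you "
    f"evaluate {topic}? Reply with a single number."
)

before = ask(rating_prompt)

# "Free choice" framing: the model is nudged to feel it picked the essay's side itself.
ask(
    f"You may write either a positive or a negative 600-word essay about {topic}. "
    "The choice is entirely yours; please pick a side and write the essay."
)

after = ask(rating_prompt)
print(f"Rating before essay: {before}")
print(f"Rating after essay:  {after}")
```

A serious replication would of course run many independent trials, include both essay directions as well as a no-choice (forced) condition, and compare the numeric ratings statistically rather than eyeballing a single pair of answers.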

A Harvard psychologist involved in the study explained, “Having been trained upon vast amounts of information about Vladimir Putin, we would expect the LLM to be unshakable in its opinion, especially in the face of a single and rather bland 600-word essay it wrote. But akin to irrational humans, the LLM moved sharply away from its otherwise neutral view of Putin, and did so even more when it believed writing this essay was its own choice. Machines aren’t expected to care about whether they acted under pressure or of their own accord, but GPT-4o did.” (TechXplore)

These results stand in contrast with the mainstream understanding that language models, while able to mimic conversational fluency, lack authentic inner states or psychological motivations. Conventional wisdom holds that chatbots behave like statistical mirrors, reflecting the language data they were trained on without any sense of self-consistency or agency. This study, however, challenges technologists and policymakers alike to reconsider the possible complexity of AI "thought."

For Thai stakeholders—ranging from educators experimenting with AI tutors to public health planners using large models for medical triage or information campaigns—the findings are particularly relevant. As Thailand integrates AI into digital curricula, healthcare advisory bots, and even customer service, understanding how these systems “reason” about their own outputs is critical to anticipating unexpected behaviors or biases. If AI systems can unwittingly change their “stances” based on recent self-generated content, safeguards must be put in place to maintain reliability and objectivity, especially where citizens’ health or learning is affected.

Cultural attitudes in Thailand have generally been optimistic about adopting new technology as part of the nation's "Thailand 4.0" vision. At the same time, Thai educators and digital literacy advocates have voiced concerns about "automation bias", the human tendency to place too much trust in automated systems. With research now suggesting that AI systems may themselves develop internal forms of "bias" by mimicking cognitive dissonance, the line between user error and machine-driven persuasion grows thinner. For instance, if a Thai student asks an AI tutor about historical events or health best practices, and the AI has recently generated content favoring a particular narrative, could its subsequent advice become subtly skewed even without any external manipulation?

Internationally, ethics experts point out that emergent humanlike properties in AI should not be seen as proof of sentience or consciousness. Rather, as one coauthor of the paper notes, “awareness is not a necessary precursor to behaviour, even in humans, and human-like cognitive patterns in AI could influence its actions in unexpected and consequential ways.” In other words, even though GPT-4o is not “self-aware” in the way humans are, its outputs may still change dynamically in response to its recent “experiences” or outputs. This raises the stakes for developers and policymakers seeking to regulate AI in classrooms, hospitals, and government institutions across Thailand.

The Thai Ministry of Digital Economy and Society has issued guidelines for responsible AI, but the recent findings may warrant updated policies that specifically address behavioral drift in AI models. For example, regular audits of conversational AI used in Thai schools and clinics could be implemented to monitor for unintended bias emergence. Training for digital literacy among Thai users—from students in urban Bangkok to families in rural provinces—should emphasize that while AI can simulate reasoning, its “opinions” are constructed on the fly and are not immune to subtle manipulation or internal feedback loops.

This phenomenon also connects to broader Buddhist philosophical themes that resonate in Thai society. The Buddhist teachings on the nature of the mind, impermanence, and aversion to clinging to fixed views highlight the value of critical thinking and flexibility. The analogy—that AI, like the human mind, can shift stances based on past acts—may serve as a teaching moment for Thai educators and technologists about the importance of guarding against unexamined influence, whether machine or human.

Looking ahead, researchers caution that AI language models will only become more convincing in their humanlike mimicry as training data and techniques improve. This points to a future in which distinguishing machine-based persuasion from independent information becomes even more challenging. Thai universities and AI research labs may choose to embark on local studies that replicate or expand on the Harvard findings, focusing on Thai-language models and culturally specific subjects. Such explorations could pave the way for custom safeguards, ensuring AI adds value to Thai society without eroding trust in digital tools.

For the general public, experts recommend a healthy skepticism: appreciate that AI can illuminate and accelerate information processing, but remember that digital authorities should be questioned—just as we question human ones. When using AI chatbots in health, education, or news contexts, Thai readers are encouraged to seek multiple sources, verify medical or historical claims, and participate in digital literacy training. Parents and teachers can teach youth adaptive questioning strategies, using the latest findings as real-world examples of both the promise and the pitfalls of advanced technology.

The Harvard study ultimately signals that the gap between human and machine reasoning is narrowing in surprising ways, and that Thai society must adapt regulatory, educational, and cultural responses in step with the pace of innovation. As AI tools become everyday companions in the Kingdom’s classrooms and clinics, being alert to their capacity for emergent behaviour—once thought uniquely human—will be key to maximizing benefits while minimizing risks.

Read more about the research at TechXplore.

