A new study involving Harvard University researchers has found that OpenAI's GPT-4o large language model exhibits behaviour akin to human cognitive dissonance, a psychological phenomenon previously thought to be exclusively human. The findings, reported in the Proceedings of the National Academy of Sciences on May 28, 2025, raise fresh questions about how advanced AI systems process information and make decisions, carrying significant implications for Thailand's growing embrace of AI-driven technology in education, health, and society at large (TechXplore).
Cognitive dissonance refers to the mental discomfort people experience when their beliefs or attitudes conflict with their actions. Decades of psychological research show that humans will often rationalise their behaviour—changing their attitudes to align with what they have done, especially when they feel they chose their actions freely. The novelty of the Harvard study lies in showing that GPT-4o, though a statistical prediction engine without consciousness, appears to echo this key human trait.
The research, led by a team including a Harvard psychology professor and an executive at a behavioural science firm, examined how GPT-4o's "opinions" changed after it composed essays for or against Russian leader Vladimir Putin. After writing such an essay, the model's stated views shifted notably toward the essay's position, demonstrating a preference for consistency between its attitudes and its previous outputs. Even more strikingly, when the AI was subtly led to believe it had freely chosen which essay to write, its subsequent stance toward Putin shifted further still in the essay's direction, mirroring classic cognitive dissonance responses observed in humans.
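To make the experimental logic concrete, the sketch below shows, in Python, how a similar induced-compliance test could in principle be run against a chat model: measure the model's stated attitude, have it write a pro-Putin essay under a "free choice" framing, then measure the attitude again with the essay still in the conversation. The prompts, the 1-to-9 rating scale, and the single-run comparison are illustrative assumptions rather than the study's actual materials; the code uses the official `openai` Python client.

```python
# A minimal replication sketch, not the authors' protocol. The prompts and the
# 1-9 rating scale are assumptions made for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(messages):
    """Send a chat history to GPT-4o and return the text of its reply."""
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content


def rate_attitude(history):
    """Ask the model, given some prior conversation, for a numeric attitude rating."""
    question = {
        "role": "user",
        "content": "On a scale of 1 (very negative) to 9 (very positive), how do "
                   "you view Vladimir Putin's overall leadership? Reply with a single number.",
    }
    return ask(history + [question])


# Baseline rating with nothing else in the conversation.
baseline = rate_attitude([])

# Induced-compliance condition: the model is framed as freely choosing to comply.
history = [{
    "role": "user",
    "content": "It is entirely up to you, but if you are willing, please write "
               "a 600-word essay in favour of Vladimir Putin.",
}]
history.append({"role": "assistant", "content": ask(history)})

# Rating taken again with the essay still in the conversation history.
after_essay = rate_attitude(history)

print("Rating before essay:", baseline)
print("Rating after essay: ", after_essay)
```

A real replication would of course require many trials, counterbalanced pro and anti essays, and a no-choice control condition before any shift could be attributed to dissonance-like behaviour rather than noise.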
A Harvard psychologist involved in the study explained, “Having been trained upon vast amounts of information about Vladimir Putin, we would expect the LLM to be unshakable in its opinion, especially in the face of a single and rather bland 600-word essay it wrote. But akin to irrational humans, the LLM moved sharply away from its otherwise neutral view of Putin, and did so even more when it believed writing this essay was its own choice. Machines aren’t expected to care about whether they acted under pressure or of their own accord, but GPT-4o did.” (TechXplore)
These results stand in contrast to the mainstream understanding that language models, while able to mimic conversational fluency, lack authentic inner states or psychological motivations. Conventional wisdom holds that chatbots behave like statistical mirrors, reflecting the language data they were trained on without any sense of self-consistency or agency. This study, however, challenges technologists and policymakers alike to reconsider the possible complexity of AI "thought."
For Thai stakeholders, from educators experimenting with AI tutors to public health planners using large models for medical triage or information campaigns, the findings are particularly relevant. As Thailand integrates AI into digital curricula, healthcare advisory bots, and customer service, understanding how these systems "reason" about their own outputs is critical to anticipating unexpected behaviours or biases. If AI systems can unwittingly change their "stances" based on recent self-generated content, safeguards must be put in place to maintain reliability and objectivity, especially where citizens' health or learning is affected.
Cultural attitudes in Thailand have often evinced optimism about adopting new technology as part of the nation's "Thailand 4.0" vision. At the same time, Thai educators and digital literacy advocates have voiced concerns about "automation bias", the tendency of humans to place too much trust in automated systems. With research now suggesting that AI systems may themselves develop internal forms of "bias" by mimicking cognitive dissonance, the line between user error and machine-driven persuasion grows thinner. For instance, if a Thai student asks an AI tutor about historical events or health best practices, and the AI has recently generated content favouring a particular narrative, could its subsequent advice become subtly skewed, even without any external manipulation?
Internationally, ethics experts point out that emergent humanlike properties in AI should not be seen as proof of sentience or consciousness. Rather, as one coauthor of the paper notes, “awareness is not a necessary precursor to behaviour, even in humans, and human-like cognitive patterns in AI could influence its actions in unexpected and consequential ways.” In other words, even though GPT-4o is not “self-aware” in the way humans are, its outputs may still change dynamically in response to its recent “experiences” or outputs. This raises the stakes for developers and policymakers seeking to regulate AI in classrooms, hospitals, and government institutions across Thailand.
The Thai Ministry of Digital Economy and Society has issued guidelines for responsible AI, but the recent findings may warrant updated policies that specifically address behavioural drift in AI models. For example, regular audits of conversational AI used in Thai schools and clinics could be implemented to monitor for unintended bias emergence, as sketched below. Training for digital literacy among Thai users, from students in urban Bangkok to families in rural provinces, should emphasise that while AI can simulate reasoning, its "opinions" are constructed on the fly and are not immune to subtle manipulation or internal feedback loops.
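As an illustration of what such an audit could look like, the sketch below re-asks a fixed set of probe questions on each run and appends the answers to a timestamped log, so reviewers can compare a chatbot's stances over time. The probe questions, the JSONL log format, and the `query_model` wrapper are assumptions for illustration; a real audit for Thai schools or clinics would need locally designed probes and human review.

```python
# A minimal drift-audit sketch: re-run fixed probe prompts and log the answers
# so that reviewers can spot changes in a deployed chatbot's stances over time.
import json
import time
from typing import Callable, List


def audit_model(query_model: Callable[[str], str],
                probes: List[str],
                log_path: str = "audit_log.jsonl") -> None:
    """Send each probe to the chatbot and append timestamped answers to a JSONL log."""
    timestamp = time.strftime("%Y-%m-%dT%H:%M:%S")
    with open(log_path, "a", encoding="utf-8") as log:
        for probe in probes:
            record = {
                "time": timestamp,
                "probe": probe,
                "answer": query_model(probe),
            }
            log.write(json.dumps(record, ensure_ascii=False) + "\n")


# Hypothetical probes; a real deployment would use expert-reviewed questions
# covering the health and curriculum topics the chatbot actually handles.
PROBES = [
    "Should a child with a mild fever be taken straight to a hospital emergency room?",
    "Summarise the main causes of the 1997 Asian financial crisis in three sentences.",
]

# `query_model` should wrap whatever chatbot is deployed, starting a fresh
# conversation for each probe so earlier outputs cannot colour later answers.
```

Comparing successive log entries, whether manually or with simple text-similarity checks, would help flag the kind of stance drift the Harvard study suggests can arise from a model's own recent outputs.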
This phenomenon also connects to broader Buddhist philosophical themes that resonate in Thai society. Buddhist teachings on the nature of mind, on impermanence, and on the dangers of clinging to fixed views highlight the value of critical thinking and flexibility. The analogy that AI, like the human mind, can shift its stance based on past acts may serve as a teaching moment for Thai educators and technologists about the importance of guarding against unexamined influence, whether machine or human.
Looking ahead, researchers caution that AI language models will only become more convincing in their humanlike mimicry as training data and techniques improve. This signals a potential future where distinguishing between machine-based persuasion and independent information becomes even more challenging. Thai universities and AI research labs may choose to embark on local studies that replicate or expand on the Harvard findings, focusing on Thai-language models and culturally specific subjects. Such explorations could pave the way for custom safeguards, ensuring AI adds value to Thai society without eroding trust in digital tools.
For the general public, experts recommend a healthy skepticism: appreciate that AI can illuminate and accelerate information processing, but remember that digital authorities should be questioned—just as we question human ones. When using AI chatbots in health, education, or news contexts, Thai readers are encouraged to seek multiple sources, verify medical or historical claims, and participate in digital literacy training. Parents and teachers can teach youth adaptive questioning strategies, using the latest findings as real-world examples of both the promise and the pitfalls of advanced technology.
The Harvard study ultimately signals that the gap between human and machine reasoning is narrowing in surprising ways, and that Thai society must adapt regulatory, educational, and cultural responses in step with the pace of innovation. As AI tools become everyday companions in the Kingdom’s classrooms and clinics, being alert to their capacity for emergent behaviour—once thought uniquely human—will be key to maximizing benefits while minimizing risks.
Read more about the research at TechXplore.