Artificial intelligence tools such as ChatGPT are renowned for churning out a rapid torrent of ideas, but new research suggests these machine-generated responses may be quietly steering humans toward conformity, raising important questions for educators, businesses, and policymakers in Thailand and around the world. Recent findings reported by multiple outlets, including a widely cited summary on Axios, reveal that while AI can help people brainstorm faster and at greater volume, the resulting ideas tend to be strikingly similar to one another, limiting the diversity of creative thought.
At a time when Thai universities, schools, and creative industries are all scrambling to integrate generative AI into classrooms and workplaces, the study’s results matter deeply. Thai students and professionals are being encouraged to harness AI tools for everything from writing project proposals and essays to planning product campaigns or solving social problems. With “creativity” frequently cited as a key 21st-century skill in Thailand’s National Education Plan, the possibility that AI may actually constrain rather than unleash imaginative thinking is cause for alarm.
According to the Axios summary, researchers publishing in Nature Human Behaviour compared the creative output of unassisted humans with that produced with the assistance of large language models like ChatGPT. The headline result is stark: While AI was able to spit out more ideas than any human working alone, those ideas quickly became repetitive, clustering around similar concepts, words, and narrative structures. In brainstorming experiments, the breadth of ideas generated with AI’s help was significantly narrower than when people brainstormed independently or in groups. As a result, overreliance on AI during the creative process may lead entire teams, or even industries, to converge on similar solutions, eroding the “serendipity” and creative friction that spark real innovation.
Expert opinions bear out these concerns. As summarized by Axios and echoed in recent commentaries in Nature Human Behaviour, psychologists and cognitive scientists warn that AI models are trained on enormous datasets drawn from what has already been written, which means their outputs can echo and reinforce the mainstream, or majority, viewpoint. “The problem is, if everyone is using similar AI tools, the randomness and diversity of thought gets squeezed out,” noted one behavioral scientist associated with the original research. Another expert at a US university likened the effect to “putting all the world’s brainstormers in the same echo chamber.”
This “conformity effect” isn’t merely theoretical: It has already begun to affect real workplaces. In a Psychology Today analysis, a therapist and former journalist shared stories of colleagues and clients using AI for work and therapy assignments, only to find that the resulting ideas were repetitive and overly generic, lacking the critical depth that characterizes truly creative output. “Writers no longer think for themselves… when someone submits copy written by AI, it can take days to unravel,” noted a communications professional interviewed for the piece.
For Thai educators and policymakers, these findings highlight a dilemma: Generative AI can drastically boost productivity, helping students and workers generate more ideas faster, but it may simultaneously endanger the distinctiveness of Thai thought, language, and cultural perspective. Thai schools and universities, from selective international campuses to rural state schools, are investing in digital classrooms and AI tools to spark student engagement and problem-solving. However, as curriculum reformers aim to move beyond rote learning, there is a real risk that a tidal wave of machine-generated, globally homogeneous content will overwhelm local ingenuity and cultural nuance.
Historical context offers an important cautionary tale. In the late 20th century, Thailand saw widespread adoption of rote memorization in schools, a practice imported with Western educational models. Now, as the Ministry of Education and the Office of the Basic Education Commission push for “creative learning” under Thailand 4.0, it is crucial not to repeat past mistakes by swapping one kind of uniformity (memorized answers) for another (AI-generated conformity). As one Thai education researcher put it, “True Thai creativity emerges from a mixture of tradition, language, and everyday ingenuity—qualities that generic AI systems, trained mostly on English texts and Western frames, can struggle to capture.”
Notably, these concerns are not unique to Thailand. Global experts in creativity research, such as Dr. Roger Beaty of Pennsylvania State University, have observed that “hybrid” human-AI teams can produce more ideas, but if both sides rely too much on model-driven prompts, originality may decline (Nature Human Behaviour). Other recent studies suggest that while working with generative AI can help individuals overcome “blank page” anxiety and surface common ideas quickly, the real creative leaps still come from human users who actively critique, remix, or push back against the AI’s initial offerings.
What should Thai organizations, teachers, and policymakers do in response? Experts advise a careful, blended approach. As suggested in a Washington Post report, rather than relying on AI to generate solutions from scratch, educators and teams should use AI as a “jumping-off point”—a tool for rapid idea generation, but not the final word. Encouraging critical reflection, debate, and collaborative editing is essential to preserving diversity of thought.
One practical recommendation for Thai classrooms and creative agencies is to “diversify” their prompts, intentionally seeking out unusual, unpopular, or even nonsensical ideas and mixing them in with AI-generated suggestions. Building awareness of cognitive biases and AI’s tendency to echo the mainstream can equip users to recognize when their own thinking moves toward conformity. Guardians of Thai language and culture should also encourage students and professionals to inject Thai idioms, local knowledge, and regional flavors into their work as a safeguard against cultural flattening.
Forward-thinking policy may also play a role. The Ministry of Higher Education, Science, Research and Innovation could consider funding training programs to help Thai students and professionals become “AI-literate”—not only in how to use these tools, but how to challenge them critically and supplement their output with traditional Thai wisdom, context, and values.
Looking to the future, there is some cause for optimism. The rapid pace of AI development means that new, more pluralistic models—trained on diverse datasets including more Asian, Thai, and regional content—might soon help counteract some conformity risks. Open-source AI models and domain-specific tools tailored to Southeast Asian languages or Thai cultural contexts could empower users to retain local voice and color while benefiting from the raw speed and productivity boosts of machine brainstorming.
Ultimately, the lessons for Thailand echo a universal message: AI is most powerful as a creative partner, not a creative authority. To safeguard Thai ingenuity and cultural identity in the digital age, educators, businesses, and individual users must steer a careful course—one that embraces the productivity benefits of generative AI, while doubling down on the uniquely human skills of critical thinking, cultural awareness, and creative risk-taking. The question is not whether Thais should use AI for brainstorming, but how to do so in a way that keeps the nation’s diverse voices and imaginations alive.
For Thai readers, the call to action is clear: Use ChatGPT and other generative AI tools to accelerate the mundane and break through creative blocks, but never stop questioning, remixing, or deeply engaging with the content these systems suggest—especially when the goal is truly original, Thai-centered innovation.
Sources: