Scientists Investigate How AI Tools Like ChatGPT Are Changing Our Brains


The explosive rise in popularity of AI-powered chatbots such as ChatGPT is sparking rigorous new research into how these digital assistants may be fundamentally altering the way our brains work. As Thai students, professionals, and families increasingly turn to generative AI for tasks ranging from essay writing to bedtime stories, urgent questions are emerging about whether this convenience comes with hidden cognitive costs.

For Thais who have rapidly adopted generative AI in education and everyday life, this inquiry has special relevance. Thailand’s government and universities have promoted digital literacy and the integration of AI in classrooms, aiming to boost competitiveness in the regional economy. Yet concerns are growing: is this powerful technology sharpening our minds, or is it making us passive consumers of machine-generated knowledge?

A new wave of research is beginning to offer some preliminary answers. One of the most talked-about studies, conducted by the MIT Media Lab, involved college students writing essays with and without the help of ChatGPT. Using caps lined with electrodes to measure brain activity, researchers found that students who relied on the chatbot showed the lowest brain engagement and underperformed at neural, linguistic, and behavioral levels compared to those who used traditional web searches or no aid at all. The students’ essays, teachers noted, tended to sound alike and lacked the “personal flourishes” that mark original thought. Those who wrote unaided not only exhibited more neural activity associated with memory and creativity, but also produced work that was more accurate and carried a stronger sense of individual ownership (Washington Post).

What’s more, when the ChatGPT-dependent students were later asked to rewrite their essays without AI, most struggled to remember what they had written in the first place. This finding points to a phenomenon some researchers are calling “cognitive debt”: by off-loading thinking and creativity to AI, we may weaken the very mental muscles we need to solve unfamiliar problems and express original ideas.

Another study out of the U.K. surveyed over 600 participants and found a significant negative correlation between frequent AI use and critical thinking skills, with the effect most pronounced among younger users and those with less education. These findings resonate in Thai society, where smartphones and internet access have become nearly ubiquitous among young people and the pressure to perform well academically is intense.

But the debate is far from settled. For every scientist warning about the perils of “cognitive off-loading”—a term that describes our habit of letting external tools remember and process information for us—another points to the opportunities unlocked by freeing our minds from routine tasks. The Wharton School of the University of Pennsylvania studied nearly 1,000 high school students in Turkey, allowing some to use a ChatGPT-style tutor program. The experiment found that students with access to the AI tutor scored significantly better on practice math problems, though they faltered when the tool was taken away. However, a subgroup given AI with built-in guidance features performed roughly the same as those who never used AI, suggesting responsible design could allow AI to augment rather than replace critical thinking.

Experts emphasize the limitations of current research. The MIT study, for example, was not peer reviewed and involved a small sample size. It also measured simple essay-writing in a low-stakes environment—not the kinds of long-term, high-pressure cognitive tasks many Thais undertake in universities or complex knowledge-based professions. Neural activity measured by EEG may not capture the full richness of thought, and more research is needed to understand the lasting implications (Washington Post).

Even so, the debate taps into historical anxieties about disruptive technologies. Socrates worried that the invention of writing would make people forgetful. Thai educators of earlier generations shared misgivings about calculators and, more recently, about students’ reliance on search engines. Each leap in convenience has come with predictions of declining intellectual ability, yet societies have typically adapted, finding new avenues for creativity and learning.

Thai educators are recognizing this dilemma. According to a senior lecturer at a leading Thai university’s Faculty of Education, “We see AI as a double-edged sword. It can help students brainstorm and organize information, but if it becomes a crutch, students may lose the habit of deep thinking or original analysis.” The Ministry of Higher Education, Science, Research and Innovation has initiated workshops to help educators teach students to use AI as a collaborative partner—prompting creativity and critical assessment, rather than passively accepting machine-generated answers.

Global researchers argue the dangers can be mitigated by changing the way users interact with AI tools. Instead of presenting a lifeless text box, new programs are being designed specifically to spark creativity and imagination. One such project, mentioned by a Carnegie Mellon University scientist, uses visual prompts and inspiration from the natural world, challenging users to solve problems by analogizing from biology—a practice with deep roots in Thai traditional medicine and culture.

For Thailand, where Buddhist teachings emphasize mindfulness and self-awareness, the issue resonates beyond school and office settings. If AI begins to automate not only knowledge but the thinking process itself, Thai society may face a cultural crossroads: will convenience win out over the cultivation of wisdom and individuality? Some Buddhist monks and educators are already warning that overreliance on AI could discourage critical reflection and the pursuit of “right understanding,” which are core to Thai spiritual life.

Looking ahead, both risks and opportunities abound. If AI developers can build in pedagogically sound guardrails—such as nudging users to justify their answers, reflect on inconsistencies, and seek supporting evidence—the technology could enhance learning and creativity. On the other hand, without deliberate intervention, there’s a worry that the Thai education system might drift toward homogenization, with students and workers losing their sense of curiosity and distinctiveness.

To safeguard against these risks, experts recommend that Thai families, teachers, and policy makers take practical steps:

  • Encourage students and professionals to use AI as a brainstorming partner, not as an answer machine. Ask for multiple perspectives and explanations, and cross-check with independent sources (Carnegie Mellon University HCI Institute).
  • Develop classroom activities that require defending answers or interpretations, blending AI assistance with original thinking.
  • Invest in teacher development programs focused on integrating AI in ways that stimulate critical thinking, similar to current practices at forward-thinking Thai international schools.
  • Promote “digital wellness” in Thai families—setting boundaries on when and how AI is used, and having regular tech-free times for reflection or traditional activities.
  • Collaborate with software developers to localize AI tools to support Thai language and culture, avoiding a one-size-fits-all approach that might import foreign biases into Thai classrooms and workplaces (Ministry of Education Thailand).

As Thailand navigates the AI revolution, there is opportunity for leadership in harmonizing technological innovation with the wisdom traditions that have long anchored Thai identity. The coming years will require a collective effort among educators, policy makers, AI developers, and all Thai people to ensure that the benefits of AI enrich, rather than erode, the cognitive and cultural strengths that define the nation.

For readers: use generative AI thoughtfully and as a tool for idea generation, not as a replacement for your own analysis and creativity. Stay curious, ask your own questions, and remember the Thai maxim, “Learning is a lifelong journey”—one that should not be short-circuited by convenience alone.

Sources: Washington Post, Carnegie Mellon University HCI Institute, Ministry of Education Thailand
