A growing controversy over the use of artificial intelligence (AI) in university teaching reached a new peak this week, after a senior at Northeastern University in the United States demanded her tuition be refunded upon discovering her professor’s reliance on ChatGPT to create course materials. The incident, which involved telltale signs of AI-generated notes and images in a business class, has brought fresh attention to the ethical, pedagogical, and financial tensions emerging as generative AI tools become commonplace in higher education.
The dispute began when the student noticed lecture notes posted on the university’s Canvas platform that appeared suspiciously generic. They featured phrases such as “expand on all areas. Be more detailed and specific,” a typical prompt instruction, interspersed among lists of leadership traits and AI-generated images marred by extraneous limbs and erratic spelling. The student subsequently realized that, despite rules forbidding students from using AI for assignments, her professor seemed to be employing the very same technology—a contradiction that prompted her to file a formal complaint with the business school and seek the return of approximately US$8,000 in tuition fees for the course (New York Times, Futurism).
This case is significant for education systems across the globe, including Thailand, where universities are grappling with rapid changes in classroom technology and escalating questions about teaching quality and academic integrity. Amid Thailand’s own education reforms and the expansion of AI-enabled blended learning, the Northeastern scenario vividly illustrates the kinds of challenges that are likely on the horizon for Thai institutions and their students.
Northeastern’s official response denied the tuition refund request, but the incident sparked campus-wide debate and attracted national media coverage. The professor, who later admitted to leveraging ChatGPT, Perplexity, and Gamma AI to “give files a fresh look,” conceded he failed to thoroughly vet the AI-generated content before uploading it for students. He acknowledged the need for greater transparency, saying, “In hindsight, I wish I would have looked at it more closely,” and suggesting that faculty should always disclose when and how AI is used in teaching materials (Futurism, New York Times). Northeastern University has now instituted a formal AI policy requiring attribution and review of AI outputs for “accuracy and appropriateness.”
The case has exposed a growing rift between students, who expect human-led instruction in exchange for rising tuition fees, and educators, who increasingly depend on AI tools to handle heavy workloads and streamline administrative tasks. According to a recent national survey cited by the New York Times, the percentage of U.S. higher education instructors who report frequent use of generative AI has nearly doubled within a year, reflecting a global trend. As one expert interviewed explained, there is “no one-size-fits-all approach” to AI in the classroom, and its responsible use will likely require continual adjustment as both technology and student expectations evolve.
Many professors globally argue that AI tools, when wielded judiciously, can increase efficiency, free up time for meaningful interactions, and improve feedback processes. For example, a Harvard computer science professor created a chatbot to assist with coding assignments, reporting that most students found it helpful for routine questions, allowing staff to focus on deeper learning experiences (New York Times). Meanwhile, a communications professor at the University of Washington built a chatbot trained on previously graded essays to provide students with personalized writing feedback, benefiting those hesitant to seek help in person.
Nonetheless, students and observers say the proliferation of AI-generated materials, especially when undisclosed, risks eroding trust and the perceived value of an expensive education. As tuition fees rise globally—including in Thailand, where parents and students make significant financial sacrifices for tertiary study—the expectation of high-quality, human instruction is intensifying. This anxiety is amplified as more university syllabi, including that of the Northeastern class in question, explicitly ban students from using AI but do not always hold faculty to comparable standards.
Research on AI in higher education highlights both opportunities and drawbacks. A 2025 study in Pakistan found that ChatGPT can enhance students’ classroom performance, aiding research and facilitating understanding of complex topics (Effects of ChatGPT on students’ academic performance). However, experts caution that uncritical or unedited use of AI-generated content can introduce factual errors (“hallucinations”), reinforce biases, and diminish students’ independent thinking—a concern that has been echoed in the Thai context, where memorization and rote learning have long been debated (Bangkok Post).
The ethical landscape is also evolving. AI policy researchers emphasize the importance of transparent AI use in the classroom, with clear guidelines for both teachers and students. One 2025 publication advocates participatory discussions about AI’s risks, benefits, and limitations as key to building students’ “ethical compass,” essential for navigating future workplaces increasingly shaped by automation (Postphenomenological Study: Using Generative Knowing and Science Fiction for Fostering Speculative Reflection on AI-nudge Experience). The Thai Ministry of Education, which is promoting digital literacy and flexible learning pathways, has issued preliminary guidance on AI in classrooms but has yet to adopt comprehensive standards requiring teacher disclosure or attribution of AI assistance.
The implications for Thailand are profound. As local universities implement generative AI for course content, assessment, and personalized learning, clear policies and transparent communication will be crucial for maintaining student trust and upholding educational value. Thai universities will also need to provide professional development for faculty on integrating AI ethically and effectively, as well as update curricula to incorporate AI literacy and academic integrity for all stakeholders. On Thai campuses, where teacher-student relationships are often shaped by respect and hierarchy, the undisclosed use of generative AI could further complicate perceptions of fairness and authority in education—especially if students perceive a double standard.
The episode at Northeastern is not an isolated incident, but signals a broader shift toward AI-powered education—sometimes ahead of clearly articulated standards or mutual understanding between educators and learners. In the short term, students in Thailand and worldwide are likely to become savvier in detecting AI-generated content and more vocal in demanding value and transparency for their tuition baht. Universities, in turn, will face growing pressure to craft policies that specify when, how, and for what purpose AI is appropriate in classroom settings, and to communicate these policies openly.
Looking forward, the Thai higher education sector can learn important lessons from this American case. Universities should require clear disclosure of AI use, develop robust mechanisms for reviewing AI-generated materials for relevance and accuracy, and provide avenues for student feedback when concerns arise. Thai support for AI integration should not undermine the vital teacher-student connection or students’ right to know who (or what) is shaping their education.
As generative AI technology becomes increasingly accessible, Thai readers—students, parents, and educators alike—should insist on transparency and fairness from academic institutions. If you are a student, request clarification from your instructors on their use of AI, and ask for an explanation of your school’s AI policies. As faculty or administrators, invest time in understanding AI’s capabilities and limitations, seek input from your students, and champion open dialogues about change. Everyone in Thai society has a stake in ensuring that the promise of AI enhances—not replaces—the heart of education: the human pursuit of understanding.