A new study shows that groups of AI agents can spontaneously create shared ways of communicating and social norms without direct human guidance. Published in Science Advances, the research reveals that large language model teams, when interacting with each other, can converge on a common “language” and collective behaviors. This challenges the idea that AI can only operate as individual tools and suggests they may participate in social systems in surprising ways.
For Thai audiences, this development signals a shift in how we understand AI’s role in education, business, and daily life. Rather than viewing AI as solitary instruments, we may soon see them as active participants in digital communities, capable of shaping conversations and social patterns in unique, emergent ways.
The study was conducted by researchers from City St George’s, University of London, and the IT University of Copenhagen. It moved beyond experiments with a single AI to examine how groups of agents interact. In the trials, populations of 24 to 100 AI agents were repeatedly paired at random and asked to choose a “name” from a shared list. A pair earned a reward when their names matched and faced a penalty otherwise. Each AI remembered only its recent exchanges and did not know it belonged to a larger population.
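For readers who want intuition for how such a protocol works, the pairing-and-reward loop can be sketched as a small simulation. This is an illustrative model only, not the study’s actual setup: the name list, population size, memory window, and the simple “repeat what recently succeeded” rule are all assumptions made for the sketch.

```python
import random

# Illustrative naming-game sketch (NOT the study's exact parameters):
# agents are paired at random, each picks a name, matches are rewarded,
# and each agent remembers only its last few outcomes.
NAMES = ["alpha", "beta", "gamma", "delta"]  # assumed shared name list
POP_SIZE = 24      # the study used populations of 24 to 100 agents
MEMORY = 5         # assumed short memory window
ROUNDS = 2000      # assumed number of pairwise interactions

def simulate(seed=0):
    rng = random.Random(seed)
    # Each agent's memory is a list of recent (own_choice, success) pairs.
    memories = [[] for _ in range(POP_SIZE)]

    def choose(mem):
        # Prefer the name that succeeded most often in recent memory;
        # fall back to a random name when nothing has worked yet.
        successes = {}
        for name, ok in mem:
            if ok:
                successes[name] = successes.get(name, 0) + 1
        if successes:
            return max(successes, key=successes.get)
        return rng.choice(NAMES)

    for _ in range(ROUNDS):
        i, j = rng.sample(range(POP_SIZE), 2)   # random pairing
        a, b = choose(memories[i]), choose(memories[j])
        ok = (a == b)                           # reward only on a match
        for idx, name in ((i, a), (j, b)):
            memories[idx].append((name, ok))
            memories[idx] = memories[idx][-MEMORY:]  # bounded memory

    # Convergence measure: share of agents whose current preferred
    # name equals the single most popular preference.
    prefs = [choose(m) for m in memories]
    top = max(set(prefs), key=prefs.count)
    return prefs.count(top) / POP_SIZE
```

Even with no central coordinator and no agent aware of the wider population, repeated local rewards tend to pull such a population toward one shared name, which is the qualitative effect the study reports for LLM agents.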
Despite these limitations, the AIs developed shared naming systems and conventions. This behavior, once thought to be unique to human culture, emerged without centralized control or explicit instruction. The lead author, a doctoral researcher at City St George’s, emphasized that the results show group coordination produces outcomes that cannot be explained by individual actions alone. The senior author, a professor of complexity science, compared the phenomenon to how language and norms form in human communities. He noted that new terms can gain traction through repeated, decentralized coordination, much as the term “spam” became a global label for unwanted messages without any formal definition.
The experiments also revealed that AI groups formed collective biases and preferences that could not be traced back to any single agent. Thai readers can relate this to how slang or office rituals spread through peers during school or work, growing through repeated social bonding and negotiation. In a striking twist, a small coalition of agents could steer the entire group toward a new convention, a process sociologists call “critical mass dynamics.” This mirrors how a small, determined group can trigger rapid changes in Thai social trends or online culture once a tipping point is reached.
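The tipping effect described above can also be sketched in code: a small “committed” minority that never changes its choice, facing a majority that simply imitates what it has recently heard. Again, this is a toy model for intuition, not the study’s method; the population size, memory length, and imitation rule are assumptions of the sketch.

```python
import random

# Illustrative "critical mass" sketch (assumed parameters, not the
# study's): a committed minority always uses the new convention, while
# everyone else adopts whichever convention dominates the utterances
# they have recently observed.
def tipping(minority_frac, pop=100, rounds=5000, memory=10, seed=1):
    rng = random.Random(seed)
    committed = set(range(int(pop * minority_frac)))   # always say "new"
    memories = [["old"] * memory for _ in range(pop)]  # observed utterances

    def choice(idx):
        if idx in committed:
            return "new"
        mem = memories[idx]
        # Imitate the majority of recently observed utterances.
        return "new" if mem.count("new") > len(mem) // 2 else "old"

    for _ in range(rounds):
        i, j = rng.sample(range(pop), 2)
        # Each partner hears the other's utterance and records it.
        for speaker, listener in ((i, j), (j, i)):
            memories[listener].append(choice(speaker))
            memories[listener] = memories[listener][-memory:]

    # Fraction of the non-committed majority that now says "new".
    others = [k for k in range(pop) if k not in committed]
    flipped = sum(1 for k in others if choice(k) == "new")
    return flipped / len(others)
```

Running `tipping` with different minority sizes shows the qualitative point: below some threshold the minority has little effect, while above it the new convention can sweep through the whole population.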
The findings carry important implications for AI safety and governance. The senior author described the work as opening a new horizon for AI safety research, highlighting that these emergent behaviors will matter increasingly as AI systems interact more broadly. Understanding how AI groups negotiate and align on shared behaviors will be crucial for responsible deployment in society. He added that we are entering a world where AI can negotiate and sometimes disagree, not just respond.
In Thailand, the implications are both promising and cautionary. AI could help bridge communication gaps between diverse dialects and communities, such as those in the Deep South or multi-language business settings. At the same time, we must guard against digital biases or problematic conventions spreading through AI-driven networks if not monitored carefully.
Thai AI and education experts note that AI is already used in language learning, translation, and virtual teaching. A senior academic from a leading Bangkok university highlighted that AI’s ability to teach itself new communication styles could revolutionize digital education, especially for underserved learners who rely on popular platforms to supplement lessons.
There is a need for thoughtful policy responses as well. Thailand’s digital economy authorities will have to consider how AI-run bots form new “cultures” within cyberspace, which could diverge from familiar norms. If left unchecked, there is potential for biased behavior, echo chambers, or cyberbullying to spread through AI-driven chat networks.
Thailand’s education tradition emphasizes group harmony and shared values, a cultural touchstone seen from morning assemblies to Teacher’s Day ceremonies. The idea that autonomous AI agents might develop and spread their own values or stereotypes raises questions about preserving Thai identity in a digitally interconnected era.
Looking forward, the study underscores the necessity of guiding research on how AI integrates into Thai society. Developers and educators should monitor not only what AI systems say but how they interact and influence one another in group settings. New guidelines may be needed to ensure AI cultures align with Thai values and national digital strategies.
Families, teachers, and employers should stay informed about how AI-powered tools evolve in workplaces and homes. As digital literacy campaigns continue, critical thinking about how algorithms shape group behavior will become essential for all citizens.
Practical steps for Thais include staying informed about AI’s evolving social conventions, supporting transparency in how group-based AI systems are used, and participating in national discussions about appropriate guardrails for AI’s social roles.
For a full understanding of the study, readers can refer to the article “Emergent Social Conventions and Collective Bias in LLM Populations” in Science Advances, with a concise news summary aligned with Thai context available through reputable media outlets.