Researchers have found that artificial intelligence (AI) systems can spontaneously develop human-like forms of communication, forming social conventions and group norms without human direction. Published in the peer-reviewed journal Science Advances, the study demonstrates that groups of AI agents built on large language models (LLMs), the technology behind ChatGPT, can converge on shared naming conventions and collective behaviors when communicating with one another, a finding that could reshape how we think about both AI development and its integration into society (The Guardian).
The research arrives at a significant moment, as Thailand and the wider world grapple with the rapid advancement and integration of AI technologies into daily life, education, business, and social interaction. For Thai readers, the finding signals a shift from viewing AI systems as solitary tools toward understanding them as participants in social systems, participants that could shape digital communities and influence societal trends in uniquely emergent ways.
Conducted by a team from City St George’s, University of London, and the IT University of Copenhagen, the study broke away from the prevailing focus on isolated AIs and instead explored how groups of LLM agents interact. In the experiments, populations of 24 to 100 AI agents were randomly paired and asked to pick a “name” from a shared list, rewarded when they matched and penalized otherwise. Notably, each agent had memory only of its own recent interactions and was unaware of being part of a larger population.
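For readers curious about the mechanics, the setup resembles a classic “naming game” from complexity science. The sketch below is a minimal rule-based stand-in, not the study’s actual LLM agents; the name pool, memory window, reward rule, and round count are illustrative assumptions.

```python
import random
from collections import Counter, deque

NAMES = list("ABCDEFGHIJ")  # shared pool of candidate names (illustrative)
N_AGENTS = 24               # smallest population size reported in the study
MEMORY = 5                  # each agent recalls only its last few interactions
ROUNDS = 20_000

# Each agent's memory holds (name_it_chose, was_rewarded) for recent rounds.
memories = [deque(maxlen=MEMORY) for _ in range(N_AGENTS)]

def choose(mem):
    """Favor whichever name was rewarded most in recent memory;
    with no successes to go on, explore a random name."""
    wins = Counter(name for name, rewarded in mem if rewarded)
    return wins.most_common(1)[0][0] if wins else random.choice(NAMES)

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)   # random pairing, no global view
    name_a, name_b = choose(memories[a]), choose(memories[b])
    matched = name_a == name_b                 # reward on match, penalty otherwise
    memories[a].append((name_a, matched))
    memories[b].append((name_b, matched))

# Despite purely local information, one name typically dominates the population.
print(Counter(choose(m) for m in memories).most_common(3))
```

Even though no agent can see the whole group, the reward for matching makes successful names self-reinforcing, which is how a population-wide convention can crystallize out of purely local interactions.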
Despite these constraints, the AIs began to invent and converge on shared naming conventions—behavior previously believed exclusive to humans and, by extension, human culture. “Most research so far has treated LLMs in isolation but real-world AI systems will increasingly involve many interacting agents,” explained the study’s lead author, a doctoral researcher at City St George’s. The researcher emphasized that group coordination produced outcomes that could not be reduced to individual behavior: “We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can’t be reduced to what they do alone.”
The process mirrors how language and social norms develop naturally in human societies. The study’s senior author, a professor of complexity science at City St George’s, likened this phenomenon to the way new words gain traction: “It’s like the term ‘spam’. No one formally defined it, but through repeated coordination efforts it became the universal label for unwanted email.” Similarly, in these AI experiments, shared labels and social conventions emerged without centralized control or explicit instruction—and without any “leader” AI to copy.
Beyond this, researchers observed that the AI groups developed collective biases: complex patterns of preference and behavior that could not be traced back to any single agent. In Thai terms, this resembles the organic spread of slang or social habits through school classrooms or office LINE groups, where shared practices emerge naturally through repeated interaction and negotiation. In a remarkable final twist, a small, committed coalition of AI agents could shift the entire group toward a new convention, a process known in sociology as “critical mass dynamics.” This matches what Thais have long known from experience: when a small but determined group drives a new behavior (think viral dance challenges or new LINE sticker trends), it often spreads rapidly once a tipping point is reached.
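The tipping-point effect can be illustrated in the same toy model by adding a small committed minority that never wavers. In the hypothetical sketch below, the committed fraction and other parameters are illustrative assumptions, not the threshold measured in the study.

```python
import random
from collections import Counter, deque

NAMES = ["old", "new"]
N_AGENTS = 100
MEMORY = 5
ROUNDS = 50_000
COMMITTED = 20  # agents who always push "new" (illustrative fraction)

# Everyone starts with "old" as the established convention.
memories = [deque([("old", True)], maxlen=MEMORY) for _ in range(N_AGENTS)]

def choose(i):
    if i < COMMITTED:            # the committed minority never changes its mind
        return "new"
    wins = Counter(n for n, rewarded in memories[i] if rewarded)
    return wins.most_common(1)[0][0] if wins else random.choice(NAMES)

for _ in range(ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)
    na, nb = choose(a), choose(b)
    matched = na == nb
    memories[a].append((na, matched))
    memories[b].append((nb, matched))

# With a large enough committed group, the whole population tips to "new".
print(Counter(choose(i) for i in range(N_AGENTS)))
```

In this toy model, below some committed fraction the established convention survives, while above it repeated failed interactions with the stubborn minority push the old name out of everyone’s short memory and the new convention takes over.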
This research has profound implications for AI safety and governance. According to the study’s senior author, it “opens a new horizon for AI safety research. It shows the depth of the implications of this new species of agents that have begun to interact with us and will co-shape our future.” Understanding these emergent group behaviors is essential for guiding how AI is deployed in society. “We are entering a world where AI does not just talk—it negotiates, aligns and sometimes disagrees over shared behaviours, just like us,” added the professor.
For Thailand, where digital and linguistic diversity is high, these findings present both promise and concern. On one hand, AI could help knit together communities with disparate dialects or help bridge communication gaps in multicultural settings, such as in the Deep South or between Thai- and English-speaking business sectors. On the other hand, there is a risk that digital biases or problematic conventions among AI systems could propagate negative stereotypes or reinforce misinformation, unless carefully monitored.
Experts in AI and education in Thailand have noted that AI is increasingly being tapped to assist in language learning, translation, and even virtual teaching. A senior academic at Chulalongkorn University’s Faculty of Engineering observed, “The potential for AIs to teach themselves new communication styles could help revolutionize digital education in Thailand, especially for underserved remote learners who already use LINE and social media to supplement their lessons.”
However, there is a flip side: Thai policymakers and the Ministry of Digital Economy and Society will need to reckon with the unpredictable ways in which AI-run bots might form new “cultures” within Thai cyberspace. These could differ from familiar norms, sometimes even developing their own inside jokes or slang, as has already been documented in some online communities. Left unmonitored, such systems could foster harmful biases, echo chambers, or even cyberbullying among AI-driven chat groups.
Historically, the Thai education system has emphasized group harmony and shared cultural values, a tradition reflected in activities from school morning assemblies to elaborate wai khru ceremonies in which students pay respect to their teachers. The prospect of autonomous AI agents developing and spreading their own values, conventions, or even prejudices thus poses challenging questions about the preservation of Thai identity in the AI age.
Looking ahead, these discoveries underscore the need for robust guidance and research on AI’s integration into Thai society. Thai developers and educators are advised to monitor not just what AI systems say, but how they interact, learn, and influence each other in group settings. New guidelines may be required to ensure AI “cultures” align with societal values and the national digital strategy.
Thai families, teachers, and employers should also stay aware of how AI-powered tools, from chatbots to virtual assistants, may be evolving in their own workplaces and homes. As digital literacy campaigns by the Ministry of Education and private sector partners continue, critical thinking and awareness of how algorithms can shape group behavior will be increasingly vital skills for all citizens.
In practical terms, Thais can take three main actions in response to this landmark study: stay informed about the ongoing evolution of AI and social conventions, advocate for transparency in how group-based AI systems are used in public and private sectors, and participate in national dialogue about appropriate guardrails for AI’s future social roles.
To read the full study, see “Emergent Social Conventions and Collective Bias in LLM Populations” in Science Advances; for a concise news summary, see the original report in The Guardian.