Artificial intelligence (AI) systems can form their own societies with unique linguistic norms and conventions, similar to human communities, when left alone to interact, a study published in Science Advances has shown. Scientists from City St George's, University of London and the IT University of Copenhagen conducted the research to understand how large language models (LLMs) that power AI tools interact with each other.
"Most research so far has treated LLMs in isolation but real-world AI systems will increasingly involve many interacting agents," said lead author Ariel Flint Ashery, a doctoral researcher at City St George's.
"We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can't be reduced to what they do alone."
For the experiment, researchers used a naming game in which AI agents chose names from a shared pool and were rewarded when they picked the same one. Over time, the agents developed shared conventions and biases without any explicit coordination.
In this, the AI agents mirrored human behaviour: people likewise tend to conform to shared norms. The researchers added that a small group of AI agents could push a larger group towards a particular convention, a dynamic often observed in human groups.
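The naming-game dynamic described above can be sketched in a few lines of Python. This is a minimal toy model, not the study's actual LLM-based setup: the agent count, memory rule, reward rule, and the `committed` parameter (a small group that never changes its preference, standing in for the minority effect the article mentions) are all illustrative assumptions.

```python
import random
from collections import Counter

def naming_game(num_agents=20, names=("red", "blue"), rounds=3000,
                committed=0, committed_name="red", seed=0):
    """Toy naming game: pairs of agents each pick a name from memory and are
    'rewarded' (memory collapses to the agreed name) when their picks match.
    The first `committed` agents never update, modelling a small group that
    tries to push the population toward one convention (an assumption of
    this sketch, not the study's exact protocol)."""
    rng = random.Random(seed)
    memories = [[committed_name] if i < committed else [rng.choice(names)]
                for i in range(num_agents)]
    for _ in range(rounds):
        a, b = rng.sample(range(num_agents), 2)
        pick_a, pick_b = rng.choice(memories[a]), rng.choice(memories[b])
        if pick_a == pick_b:
            # success: both non-committed agents settle on the shared name
            for agent in (a, b):
                if agent >= committed:
                    memories[agent] = [pick_a]
        else:
            # failure: each non-committed agent also remembers the
            # partner's name, so it may propose it in future rounds
            for agent, heard in ((a, pick_b), (b, pick_a)):
                if agent >= committed and heard not in memories[agent]:
                    memories[agent].append(heard)
    # consensus = share of agents backing the most common preferred name
    top = [m[0] if len(m) == 1 else rng.choice(m) for m in memories]
    name, count = Counter(top).most_common(1)[0]
    return name, count / num_agents
```

With `committed=0`, repeated pairwise rewards typically drive the population toward one arbitrary shared name; raising `committed` lets you explore how large a determined minority must be before its name tends to win out, in the spirit of the critical-mass effect the researchers describe.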
"Our results show that AI systems can autonomously develop social conventions without explicit programming and have implications for designing AI systems that align, and remain aligned, with human values and societal goals," the study noted, adding that the results held across four different LLMs: Llama-2-70b-Chat, Llama-3-70B-Instruct, Llama-3.1-70B-Instruct, and Claude-3.5-Sonnet.
Ethical dangers
The researchers stated that their work could help combat some of the ethical dangers posed by LLMs propagating biases absorbed from society.
"This study opens a new horizon for AI safety research. It shows the depth of the implications of this new species of agents that have begun to interact with us and will co-shape our future," said Andrea Baronchelli, a senior author of the study.
"We are entering a world where AI does not just talk: it negotiates, aligns, and sometimes disagrees over shared behaviours, just like us."
Mr Baronchelli added that understanding how these AI systems operate will be key to "our coexistence" with them, rather than our being subject to them.