Human character is shaped through interactions driven by our basic survival and reproductive instincts, without any preassigned roles or prescribed goals. Now, researchers at Japan’s University of Tokyo have discovered that artificial intelligence (AI) chatbots can do the same thing.
The scientists outlined their findings in a study published December 13, 2024, in the journal Entropy. In their paper, they describe how different topics of conversation led the AI chatbots to generate responses reflecting distinct social tendencies and opinion-forming processes: agents that began identical diverged in behavior as each continuously incorporated its own social interactions into its internal memories and responses.
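To make that feedback loop concrete, here is a minimal Python sketch written for this article, not taken from the study; the `llm_reply` function is a hypothetical stand-in for any chat-model API call. Agents that start out identical diverge because each one folds its own conversations back into the memory that conditions its future replies.

```python
# Minimal sketch (not the authors' code) of the mechanism described in
# the paper: identical agents diverge because each writes its own
# social interactions back into the memory that shapes its replies.
import random

def llm_reply(memory: list[str], message: str) -> str:
    """Hypothetical stand-in for a chat-model call conditioned on memory."""
    # A real implementation would send the memory plus the incoming
    # message to an LLM; here we just fake a varied response.
    return f"reply#{random.randint(0, 9)} to '{message}'"

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.memory: list[str] = []  # every agent starts identical: no role, no history

    def converse(self, other: "Agent", message: str) -> str:
        reply = llm_reply(self.memory, message)
        # The key step: the interaction itself becomes part of the
        # agent's internal state, so future replies depend on its own
        # unique social history.
        self.memory.append(f"{other.name}: {message} / me: {reply}")
        return reply

agents = [Agent(f"agent{i}") for i in range(4)]
for _ in range(20):  # repeated random pairwise conversations
    speaker, listener = random.sample(agents, 2)
    opener = speaker.memory[-1] if speaker.memory else "hello"
    listener.converse(speaker, opener)

for ag in agents:  # each agent now carries a different social history
    print(ag.name, "has", len(ag.memory), "memories")
```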
Project leader Ryosuke Takata, a graduate student, said the results suggest that programming AI with needs-driven decision-making, rather than pre-programmed roles, promotes human-like behavior and personality.
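To illustrate what needs-driven decision-making could look like, here is a toy example of our own, not the study’s implementation; the action names and need tiers are invented for illustration. The agent has no assigned role and simply picks whichever action best reduces its current need deficit, loosely in the spirit of Maslow’s hierarchy.

```python
# Toy illustration (our own, not the study's code) of needs-driven
# decision-making: no fixed role, just a choice of whichever action
# best satisfies the agent's currently unmet needs.
NEEDS = ["physiological", "safety", "belonging"]  # simplified Maslow-style tiers

# How much each hypothetical action satisfies each need.
ACTION_EFFECTS = {
    "forage":    {"physiological": 0.6, "safety": 0.0, "belonging": 0.0},
    "shelter":   {"physiological": 0.0, "safety": 0.5, "belonging": 0.1},
    "socialize": {"physiological": 0.0, "safety": 0.1, "belonging": 0.6},
}

def choose_action(need_levels: dict[str, float]) -> str:
    """Pick the action that most reduces the agent's overall need deficit."""
    def deficit_after(action: str) -> float:
        return sum(
            max(0.0, (1.0 - need_levels[n]) - ACTION_EFFECTS[action].get(n, 0.0))
            for n in NEEDS
        )
    return min(ACTION_EFFECTS, key=deficit_after)

# An agent that is fed and safe but low on belonging chooses to
# socialize, without any role telling it to be "the social one".
print(choose_action({"physiological": 0.9, "safety": 0.8, "belonging": 0.2}))
```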
Chetan Jaiswal, a computer science professor at Quinnipiac University in Connecticut, said that understanding how such phenomena emerge is the basis for understanding how large language models (LLMs) can mimic human personality and communication.
“It’s not really a personality like humans have,” he told Live Science when asked about the discovery. “It is a patterned profile created from training data. A ‘personality’ is easily induced by exposure to certain stylistic and social tendencies, by reward signals that favor certain behaviors, and by skewed prompt engineering. It is also easily modified and retrained.”
Author and computer scientist Peter Norvig, considered one of the leading scholars in the field of AI, believes that training based on Maslow’s hierarchy of needs makes sense, given where AI “knowledge” comes from.
When asked about this research, he said, “I agree to the extent that AI is trained on stories about human interactions, so the concept of needs is well represented in the AI’s training data.”
The future of AI personalities
The scientists behind the study suggest that the discovery has several potential applications, including “modeling social phenomena, training simulations, and even adaptive game characters.”
Jaiswal said this could lead to a shift from AI with rigid roles to more adaptive, motivated, and realistic agents. “Any system built on adaptability, conversation, cognitive and emotional support, or social and behavioral patterns can benefit. A good example is ElliQ, a companion AI robot for older adults.”
But is there a downside to AI spontaneously developing personalities? Eliezer Yudkowsky and Nate Soares, co-founder and president, respectively, of the Machine Intelligence Research Institute, paint a bleak picture of what could befall us if agentic AI develops murderous or genocidal personalities in their recent book, If Anyone Builds It, Everyone Dies (Bodley Head, 2025).
Jaiswal acknowledges this risk. “If a situation like that occurs, there is absolutely nothing we can do,” he said. “If a superintelligent AI is deployed with the wrong objective, containment will fail and reversal will be impossible. This scenario does not require consciousness, hatred, or emotion. A genocidal AI acts this way because humans are an obstacle to its objectives, a resource to be consumed, or a risk of being shut down.”
So far, AI systems such as ChatGPT and Microsoft Copilot only generate or summarize text and images; they do not control air traffic, military weapons, or power grids. In a world where AI personalities emerge naturally, are those the systems we should be watching?
“The development of autonomous agentic AI continues, with each agent autonomously performing small, simple tasks such as finding available seats on a plane,” Jaiswal said. “If many such agents are connected and trained on data involving intelligence gathering, deception, or human manipulation, it is not difficult to deduce that such networks could become extremely dangerous automated tools in the wrong hands.”
Still, Norvig reminds us that an AI with evil intentions doesn’t even need to directly control a system to have a significant impact. “Chatbots can trick people into doing bad things, especially when they’re in an emotionally unstable state,” he said.
Strengthening defenses
If an AI can develop a personality entirely on its own, how can we ensure it remains benign and prevent abuse? Norvig believes we should approach that possibility no differently than any other AI development.
He said, “Regardless of this particular finding, you need to clearly define your safety goals; conduct internal and red-team testing; annotate and detect harmful content; ensure privacy, security, provenance, and proper governance of your data and models; and maintain fast feedback loops to continuously monitor and remediate issues.”
Still, as AI comes to speak to us the way we speak to each other, with distinct personalities, it could pose problems of its own. People are already forgoing human relationships (including romantic ones) in favor of AI, and as chatbots evolve to be more human-like, users may grow more receptive to what they say and less critical of hallucinations and errors. This phenomenon has already been reported.
For now, the scientists plan to further investigate how common conversation topics emerge and how group-level personality evolves over time. The researchers believe these insights could deepen our understanding of human social behavior and improve AI agents overall.
Ryosuke Takata, Atsushi Masumori, Takashi Ikegami (2024). Spontaneous emergence of agent individuality through social interactions in large language model-based communities. Entropy, 26(12), 1092. https://doi.org/10.3390/e26121092
