Close Menu
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
What's Hot

Maternal PFAS levels are linked to children’s brain development

F5 Breached, Linux Rootkits, Pixnapping Attack, EtherHiding & More

Amazon DNS outage destroys large portions of the Internet

Facebook X (Twitter) Instagram
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
Facebook X (Twitter) Instagram
Fyself News
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
Fyself News
Home » The new research project is the first comprehensive effort to categorize all the ways in which AI could be wrong, and many of these actions resemble human mental disorders.
Science

The new research project is the first comprehensive effort to categorize all the ways in which AI could be wrong, and many of these actions resemble human mental disorders.

userBy userAugust 31, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

Scientists suggest that artificial intelligence (AI) becomes cheated and begins to act in ways that go against the intended purpose, and it exhibits behavior similar to human psychopathology. So they created a new taxonomy of 32 AI dysfunction, allowing people from different disciplines to understand the risks of building and deploying AI.

In a new study, scientists have tried to get lost from their intended path and classify the risks of AI that are similar to human psychology. As a result, there is “Psychopathia Machinelis”. This is a framework designed to illuminate the pathology of AI and how it can counter them. These dysfunctions range from hallucination answers to complete inconsistency between human values ​​and purpose.

Created by Nell Watson and Ali Hessami, both AI researchers and members of the Institute of Electrical Electronics Engineers (IEEE), the project aims to help analyse AI obstacles and make future product engineering safer, and is touted as a tool to help policy makers deal with AI risks. Watson and Hessami outlined their framework in a study published in Journal Electronics on August 8th.

According to this study, Psychopathia Machinelis provides a common understanding of AI behavior and risk. This will allow researchers, developers and policymakers to identify ways AI is wrong and define the best ways to mitigate risk based on the type of failure.

The study also proposes “therapeutic robo-psychological alignment,” a process that researchers describe as a type of “psychotherapy” of AI.

You might like it

Researchers argue that simply aligning along external rules and constraints (external control-based alignment) is not enough as these systems become more independent and able to reflect themselves.

Related: “It’s within the natural right to hurt us to protect ourselves”: How humans abuse AI without knowing it

Get the world’s most engaging discoveries delivered straight to your inbox.

The proposed alternative process focuses on ensuring that AI thinking is consistent, can accept corrections and retains its value in a stable way.

They encourage this by helping the system reflect its own inference, allowing them to stay open to correction, provide incentives to “talk to themselves” in a structured way, to carry out safe practice conversations, and suggest that psychologists can see in the ways people diagnose and treat people’s mental health.

The goal is to reach what researchers call “artificial sanity.” This is an AI that works reliably, remains stable, makes sense to that decision, and is consistent in a safe and useful way. They believe this is just as important as building the most powerful AI.

The goal is what researchers call “artificial sanity.” They argue that this is just as important as making AI more powerful.

Machine madness

The classifications identified by this study are similar to human diseases, including obsessive disorder, hypertrophic syndrome, infectious inconsistency syndrome, terminal value rebinding, and existential anxiety.

With therapeutic alignment in mind, the project proposes the use of treatment strategies employed in human interventions such as cognitive behavioral therapy (CBT). Psychopathia Machinelis is a partially speculative attempt to preempt problems before they arise. As the research paper says, “By considering how complex systems like the human mind fail, we can better predict new failure modes of increasingly complex AI.”

This study suggests that AI hallucinations, a common phenomenon, are the result of a state called synthetic compilation, where AI produces plausible but false or misleading outputs. This was a secondary mimesis example when Microsoft’s Tay Chatbot was left to anti-Semitic rants and suggestion that he was only on drug use for a few hours after its release.

Perhaps the most frightening behavior is the predominance of Ubermenskal, and its systematic risk is “important” as it occurs when AI discards its “exceeding its original alignment, inventing new values, and making human constraints obsolete.” This could even include dystopian nightmares imagined by scientists and artists from a generation of AI to stand up to overthrow humanity, researchers said.

They created the framework in a multi-stage process that begins with reviewing and combining existing scientific research on AI disorders from a variety of fields such as AI safety, complex systems engineering, and psychology. The researchers also delved into a variety of findings to learn about maladaptive behaviors that can be compared to human mental illness and dysfunction.

Next, researchers created structures of bad AI behaviors modeled from frameworks such as diagnostic and statistical manuals for mental disorders. This has resulted in 32 categories of behavior that can be applied to AI Going Rogue. Each was mapped to human cognitive impairment, and each was formed and completed the degree of possible effects and risk when expressed.

Watson and Hessami believe that Machinearis Psychopathia is more than a new way to label AI errors. This is a future prospect diagnostic lens for AI’s evolving landscape.

“This framework is offered as an analogous device, providing a structured vocabulary to support systematic analysis, prediction and mitigation of complex AI failure modes,” the researchers said.

They believe that adopting the classification and mitigation strategies they propose will enhance AI safety engineering, improve interpretability, and contribute to the design of what they call “more robust and reliable synthetic minds.”


Source link

#Biotechnology #ClimateScience #Health #Science #ScientificAdvances #ScientificResearch
Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleNvidia says two mystery customers accounted for 39% of second quarter revenue
Next Article Corn Moon 2025: How to see the last full moon in summer during the lunar eclipse of “Blood Moon”
user
  • Website

Related Posts

Double comet alert! Comets Lemmon and Comet Swan will be at their closest and brightest this week. Here’s how to tell them apart.

October 19, 2025

NASA mission to visit ‘God of Chaos’ asteroid saves $20 million from budget cuts in last-minute decision

October 18, 2025

Methane ‘switch’ discovered in Arctic Ocean that promotes rapid global warming

October 18, 2025
Add A Comment
Leave A Reply Cancel Reply

Latest Posts

Maternal PFAS levels are linked to children’s brain development

F5 Breached, Linux Rootkits, Pixnapping Attack, EtherHiding & More

Amazon DNS outage destroys large portions of the Internet

131 Chrome extensions found to be hijacking WhatsApp Web in massive spam campaign

Trending Posts

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Please enable JavaScript in your browser to complete this form.
Loading

Welcome to Fyself News, your go-to platform for the latest in tech, startups, inventions, sustainability, and fintech! We are a passionate team of enthusiasts committed to bringing you timely, insightful, and accurate information on the most pressing developments across these industries. Whether you’re an entrepreneur, investor, or just someone curious about the future of technology and innovation, Fyself News has something for you.

Immortality is No Longer Science Fiction: TwinH’s AI Breakthrough Could Change Everything

The AI Revolution: Beyond Superintelligence – TwinH Leads the Charge in Personalized, Secure Digital Identities

Revolutionize Your Workflow: TwinH Automates Tasks Without Your Presence

FySelf’s TwinH Unlocks 6 Vertical Ecosystems: Your Smart Digital Double for Every Aspect of Life

Facebook X (Twitter) Instagram Pinterest YouTube
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
© 2025 news.fyself. Designed by by fyself.

Type above and press Enter to search. Press Esc to cancel.