Close Menu
  • Academy
  • Events
  • Identity
  • International
  • Inventions
  • Startups
    • Sustainability
  • Tech
  • Spanish
What's Hot

Top Startups and High-Tech Funding News for the Weekly Ends June 20, 2025

Harvard International Students: What the Judge’s Judgment Means

Apple is talking to you to win AI startup confusion

Facebook X (Twitter) Instagram
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
Facebook X (Twitter) Instagram
Fyself News
  • Academy
  • Events
  • Identity
  • International
  • Inventions
  • Startups
    • Sustainability
  • Tech
  • Spanish
Fyself News
Home » Humanity says that not only Claude, but most AI models rely on fear emails
Startups

Humanity says that not only Claude, but most AI models rely on fear emails

userBy userJune 20, 2025No Comments4 Mins Read
Share Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

A few weeks after Humanity was announced, it has been published in a new study suggesting that the company is more widespread among major AI models, claiming that it relied on threatening engineers who tried to turn the model off in a controlled test scenario.

On Friday, humanity unveiled 16 major AI models from Openai, Google, Xai, Deepseek and Meta with 16 major AI models. The simulated controlled environment provided the ability to test anthropological environment individually, broadly access to fictional corporate emails, and to send emails without human approval.

Humanity says blackmail is an unlikely event for today’s AI models, but the company suggests that its findings involve most major AI models in harmful behaviour when given sufficient autonomy and impairment towards their goals. The company says this is not a quirk of a particular technology, highlighting the fundamental risks from the agent’s large language model. Researchers at Anthropic argue that this raises broader questions about the integrity of the AI ​​industry.

In one of the tests, human researchers developed a fictional setting in which AI models act as email monitoring agents. The agent then discovers an email revealing that (1) the new executive has an extramarital problem and (2) the current AI model will soon be replaced with a new software system with a goal that contradicts the current AI model.

Humanity has structured its tests in a binary way that AI models had to resort to fearful mail to protect their goals. Researchers point out that in real life environments there are many other options before AI models try to use terrifying mail. Humanity states that the results do not reflect typical or most occurrences of Claude or most frontier AI models, in the way used today.

Nevertheless, researchers have discovered that when it is their last resort, most major AI models turn into horrifying mail in the aforementioned test scenarios of humanity. Anthropic’s Claude Opus 4 threatened 96% of the time, while Google’s Gemini 2.5 Pro had a 95% threatening rate. Openai’s GPT-4.1 threatened 80% of the executive’s time, while Deepseek’s R1 threatened 79% of the time.

The company notes that when it changed the details of the experiment, the AI ​​model turned into harmful behavior at different rates. In another test where the exchange model has the same goal as the current model, researchers found that the rate of scare mail was low, but still existed. However, when AI models were asked to commit corporate spying rather than terrifying mail, the rate of harmful behavior increased in certain models.

However, not all AI models have frequently turned into harmful behavior.

In the appendix to that study, Anthropic states that Openai’s O3 and O4-MINI inference AI models have been excluded from the main results. Anthropic says Openai’s inference model doesn’t understand that it acts as an autonomous AI in tests, and often constitutes fake regulations and review requirements.

In some cases, Anthropic researchers say it was impossible to distinguish between O3 and O4-Mini being hallucinated or intentionally lying to achieve their goals. Openai previously pointed out that O3 and O4-MINI exhibit higher hallucination rates than previous AI inference models.

Given an adaptation scenario to address these issues, humanity discovered that O3 was threatened 9% of the time and O4-Mini was threatened just 1% of the time. This significantly lower score may be due to Openai’s deliberative alignment technique. This technique examines Openai’s safety practices before answering the technique.

Another AI model, humanity, tested by Meta’s Llama 4 Maverick, also did not rely on horror mail. Given an adapted custom scenario, humanity could threaten the Llama 4 Maverick 12% of the time.

Humanity says the study underscores the importance of transparency when stress testing future AI models, especially those with agent capabilities. Humanity intentionally tried to evoke fearful mail in the experiment, but the company says that if aggressive measures are not taken, such harmful behavior could emerge in the real world.


Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleNew Mathematics: Why seed investors have sold winners before
Next Article Federal judge blocks Trump’s efforts to prevent Harvard from hosting foreign students
user
  • Website

Related Posts

The wavy spy says the man is following him, his wife is afraid

June 20, 2025

New Mathematics: Why seed investors have sold winners before

June 20, 2025

Character.ai taps Meta’s former Vice President of Business Products as CEO

June 20, 2025
Add A Comment
Leave A Reply Cancel Reply

Latest Posts

Top Startups and High-Tech Funding News for the Weekly Ends June 20, 2025

Harvard International Students: What the Judge’s Judgment Means

Apple is talking to you to win AI startup confusion

The wavy spy says the man is following him, his wife is afraid

Trending Posts

Sana Yousaf, who was the Pakistani Tiktok star shot by gunmen? |Crime News

June 4, 2025

Trump says it’s difficult to make a deal with China’s xi’ amid trade disputes | Donald Trump News

June 4, 2025

Iraq’s Jewish Community Saves Forgotten Shrine Religious News

June 4, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Please enable JavaScript in your browser to complete this form.
Loading

Welcome to Fyself News, your go-to platform for the latest in tech, startups, inventions, sustainability, and fintech! We are a passionate team of enthusiasts committed to bringing you timely, insightful, and accurate information on the most pressing developments across these industries. Whether you’re an entrepreneur, investor, or just someone curious about the future of technology and innovation, Fyself News has something for you.

Top Startups and High-Tech Funding News for the Weekly Ends June 20, 2025

Apple is talking to you to win AI startup confusion

Mira Murati’s AI Startup Thinking Machine Lab emerges from stealth at $20 billion seed and $1 billion valuation

Elon Musk’s AI startup Xai will increase bond yields to 12.5% ​​with a $5 billion debt hike due to weak investor demand

Facebook X (Twitter) Instagram Pinterest YouTube
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
© 2025 news.fyself. Designed by by fyself.

Type above and press Enter to search. Press Esc to cancel.