Close Menu
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
What's Hot

Hackers leverage Microsoft Teams to spread Mathambuchas 3.0 malware to targeted businesses

BREAKING: TwinH Set to Revolutionize Legal Processes – Presented Today at ICEX Forum 2025

Faraday’s future faces potential SEC enforcement measures after three years of investigation

Facebook X (Twitter) Instagram
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
Facebook X (Twitter) Instagram
Fyself News
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
Fyself News
Home » Research Leaders Inspire the tech industry to monitor AI’s “thinking”
Startups

Research Leaders Inspire the tech industry to monitor AI’s “thinking”

userBy userJuly 15, 2025No Comments4 Mins Read
Share Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

AI researchers from Openai, Google Deepmind, a wide coalition of humanity and businesses and nonprofits are calling for a deeper look into the technology for monitoring the so-called ideas of AI inference models in a position paper published Tuesday.

A key feature of AI inference models such as Openai’s O3 and Deepseek’s R1 are the chain or COTS chain. It’s similar to how humans use scratchpads to ask difficult mathematical questions in externalization processes where AI models work through problems. Inference models are the core technology for running AI agents, and the authors of the paper argue that COT monitoring could become the core way of controlling AI agents as AI agents become more widely used and capable.

“COT monitoring presents a valuable addition to frontier AI safety measures and provides rare glimpses into how AI agents make decisions,” said researchers at Postise Paper. “Even so, there is no guarantee that the current degree of visibility will last. We encourage the research community and frontier AI developers to study how to make the most of COT monitorability and store it.”

The position paper asks the leading AI model developers to research what makes COTS “monitorable.” This means that it can increase or decrease transparency about which factors the AI model actually reaches the answer. The authors of the paper state that COT monitoring may be an important way to understand AI inference models, but note that it may be vulnerable to interventions that may reduce transparency and reliability.

The authors of the paper also invite AI model developers to track COT monitors and find out which day they can be implemented as a safety measure.

Notable signers of the paper include Openly Chief Research Officer, Ilya Satsukeiber, CEO of Safe Leader Jen, Nobel Prize winner Jeffrey Hinton, Google Deepmind co-founder Shane Legg, Zay Safety Advisor Dan Hendrix, and Thinking Machine co-founder John Shulman. The first authors include the UK Institute of AI Security and Apollo Research Leaders, with other signatories coming from Metr, Amazon, Meta and UC Berkeley.

This paper presents a moment of unity among many leaders in the AI industry to encourage research into AI safety. That comes when tech companies get caught up in fierce competition. This has led Meta to poach open, Google Deep Mind and top researchers of humanity with a million dollar offers. Some of the most highly regarded researchers are those who build AI agents and AI inference models.

“We’re at this important time where we have this new way of thinking. It’s pretty useful, but it can go away in a few years if people don’t actually concentrate.” “For me, publishing a position paper like this is a mechanism to get more research and attention on this topic before it happens.”

Openai released a preview of its first AI inference model O1 in September 2024. The tech industry has since quickly released competitors that show similar features in which some models of Google Deepmind, Xai and humanity show more advanced performance on the benchmarks.

However, there is relatively little understanding of how AI inference models work. AI Labs has been great at improving AI performance last year, but it’s not necessarily translated to a better understanding of how they will reach the answer.

Humanity is a field called interpretability, one of the industry’s leaders in understanding how AI models actually work. Earlier this year, CEO Dario Amodei announced his commitment to opening a black box for AI models by 2027 and investing more in interpretability. He called on Openai and Google Deepmind to study the topic further.

Early human studies show that COTS may not be a completely reliable indication of how these models will reach the answer. At the same time, Openai researchers say COT monitoring could one day become a reliable way to track the alignment and safety of AI models.

The goal of such position papers is to signal boost and give more attention to it with early research areas such as COT monitoring. Companies like Openai, Google Deepmind and Anthropic have already been researching these topics, but this paper could potentially encourage more funding and research.


Source link

#Aceleradoras #CapitalRiesgo #EcosistemaStartup #Emprendimiento #InnovaciónEmpresarial #Startups
Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleThe newly launched global group Raas will expand operations with AI-driven negotiation tools
Next Article Ultra-Volume Measurement DDOS Attack has reached record 7.3 TBPS and targets major global sectors
user
  • Website

Related Posts

BREAKING: TwinH Set to Revolutionize Legal Processes – Presented Today at ICEX Forum 2025

July 16, 2025

Faraday’s future faces potential SEC enforcement measures after three years of investigation

July 16, 2025

Transit software launches via IPO sensitive files

July 16, 2025
Add A Comment
Leave A Reply Cancel Reply

Latest Posts

Hackers leverage Microsoft Teams to spread Mathambuchas 3.0 malware to targeted businesses

BREAKING: TwinH Set to Revolutionize Legal Processes – Presented Today at ICEX Forum 2025

Faraday’s future faces potential SEC enforcement measures after three years of investigation

Transit software launches via IPO sensitive files

Trending Posts

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Please enable JavaScript in your browser to complete this form.
Loading

Welcome to Fyself News, your go-to platform for the latest in tech, startups, inventions, sustainability, and fintech! We are a passionate team of enthusiasts committed to bringing you timely, insightful, and accurate information on the most pressing developments across these industries. Whether you’re an entrepreneur, investor, or just someone curious about the future of technology and innovation, Fyself News has something for you.

BREAKING: TwinH Set to Revolutionize Legal Processes – Presented Today at ICEX Forum 2025

Building AGI: Zuckerberg Commits Billions to Meta’s Superintelligence Data Center Expansion

ICEX Forum 2025 Opens: FySelf’s TwinH Showcases AI Innovation

The Future of Process Automation is Here: Meet TwinH

Facebook X (Twitter) Instagram Pinterest YouTube
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
© 2025 news.fyself. Designed by by fyself.

Type above and press Enter to search. Press Esc to cancel.