California legislators behind SB 1047 raise mandatory AI safety reports

July 9, 2025

California Sen. Scott Wiener introduced a new amendment to his latest bill, SB 53, on Wednesday.

If signed into law, SB 53 would make California the first state to impose meaningful transparency requirements on leading AI developers, including OpenAI, Google, Anthropic, and xAI.

Senator Wiener’s previous AI bill, SB 1047, contained similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought fiercely against that bill, and it was ultimately vetoed by Gov. Gavin Newsom. The governor then tasked a group of AI leaders, including Fei-Fei Li, a leading researcher at Stanford University and co-founder of World Labs, with forming a policy group and setting goals for the state’s AI safety efforts.

California’s AI policy group recently published its final recommendations, citing the need for “industry requirements for publishing information about their systems” to establish a “robust and transparent evidence environment.” Senator Wiener’s office said in a press release that the SB 53 amendments were heavily influenced by the report.

“This bill continues to be a work in progress, and we look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be,” Senator Wiener said in the release.

SB 53 aims to strike the balance that critics say SB 1047 failed to achieve: ideally, creating meaningful transparency requirements for the largest AI developers without hampering the rapid growth of California’s AI industry.

“These are concerns that my organization and others have been talking about for a while,” said Nathan Calvin, vice president of state affairs for a nonprofit AI safety group, in an interview with TechCrunch. “Having companies explain to the public and the government what steps they are taking to address these risks feels like a minimal, reasonable step.”

The bill would also create whistleblower protections for AI lab employees who believe their company’s technology poses a “critical risk” to society, defined in the bill as contributing to the death or injury of more than 100 people, or to more than $1 billion in damages.

Furthermore, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.

Unlike SB 1047, Senator Wiener’s new bill does not hold AI model developers liable for the harms their models cause. SB 53 was also designed not to burden startups and researchers that fine-tune AI models from leading AI developers or that use open-source models.

With the new amendments, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. If it passes there, the bill will still have to clear several other legislative bodies before it reaches Governor Newsom’s desk.

On the other side of the US, New York Gov. Kathy Hochul is currently considering a similar AI safety bill, the RAISE Act.

The fate of state AI laws such as the RAISE Act and SB 53 was briefly in question as federal lawmakers considered a 10-year moratorium on state AI regulation, an attempt to limit the “patchwork” of AI laws that companies must navigate. However, the proposal failed in a 99-1 Senate vote in early July.

“Ensuring that AI is developed safely should be uncontroversial. It should be foundational,” Geoff Ralston, former president of Y Combinator, said in a statement to TechCrunch. “Congress should lead by requiring transparency and accountability from companies building frontier models, and states need to step up, since serious federal action is nowhere in sight.”

To this point, lawmakers have been unable to get AI companies to sign on to state-mandated transparency requirements. Anthropic has broadly supported the need for greater transparency from AI companies, and has even expressed modest optimism about the recommendations from California’s AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.

Major AI model developers typically publish safety reports for their AI models, but they have been less consistent about it in recent months. Google, for example, decided not to publish a safety report for Gemini 2.5 Pro, its most advanced AI model to date, when the model was released. OpenAI likewise declined to publish a safety report for its GPT-4.1 model; third-party research published later suggested the model may be less aligned than its predecessors.

SB 53 represents a toned-down version of previous AI safety bills, but it could still force AI companies to publish more information than they do today. For now, they will be watching closely as Senator Wiener once again tests those boundaries.


Source link