California Sen. Scott Wiener introduced a new amendment to his latest bill, SB 53, on Wednesday.
If signed into law, SB 53 would make California the first state to impose meaningful transparency requirements on leading AI developers, including OpenAI, Google, Anthropic, and xAI.
Senator Wiener’s previous AI bill, SB 1047, contained similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought fiercely against that bill, and it was ultimately vetoed by Gov. Gavin Newsom. The governor then tasked a group of AI leaders, including Fei-Fei Li, a leading researcher at Stanford University and co-founder of World Labs, with forming a policy group and setting goals for the state’s AI safety efforts.
California’s AI policy group recently published its final recommendations, citing the need for “requirements on industry to publish information about their systems” in order to establish a “robust and transparent evidence environment.” Senator Wiener’s office said in a press release that the SB 53 amendment was heavily influenced by the report.
“This bill continues to be a work in progress, and we look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be,” Senator Wiener said in the release.
SB 53 aims to strike the balance that Governor Newsom claimed SB 1047 failed to achieve: ideally, creating meaningful transparency requirements for the largest AI developers without hampering the rapid growth of California’s AI industry.
“These are concerns that my organization and others have been talking about for a while,” said Nathan Calvin, vice president of state affairs for the nonprofit AI safety group Encode, in an interview with TechCrunch. “Having companies explain to the public and the government what steps they are taking to address these risks feels like a minimal, reasonable step.”
The bill also creates whistleblower protections for AI lab employees who believe their company’s technology poses a “significant risk” to society, defined in the bill as contributing to the death or injury of more than 100 people, or to more than $1 billion in damages.
Furthermore, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.
Unlike SB 1047, Senator Wiener’s new bill does not hold AI model developers liable for the harms caused by their AI models. SB 53 was also designed so as not to burden startups and researchers that fine-tune AI models from major AI developers or that use open-source models.
With the new amendment, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. If it passes there, the bill will need to clear several other legislative bodies before reaching Governor Newsom’s desk.
On the other side of the country, New York Gov. Kathy Hochul is currently considering a similar AI safety bill, the RAISE Act.
The fate of state AI laws such as the RAISE Act and SB 53 was briefly in question as federal lawmakers considered a 10-year moratorium on state AI regulation, an attempt to limit the “patchwork” of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.
“Ensuring that AI is developed safely should be uncontroversial. It should be foundational,” Geoff Ralston, the former president of Y Combinator, said in a statement to TechCrunch. “Congress should be leading, asking for transparency and accountability from companies building frontier models. But with no serious federal action in sight, states need to step up.”
Up to this point, lawmakers have been unsuccessful in getting AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for greater transparency into AI companies, and has even expressed modest optimism about the recommendations from California’s AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.
Major AI model developers typically publish safety reports for their AI models, but they have been less consistent about it in recent months. Google, for example, decided not to publish a safety report for Gemini 2.5 Pro, its most advanced AI model to date, when it was released. OpenAI likewise decided not to publish a safety report for its GPT-4.1 model; third-party research published later suggested the model may be less aligned than previous AI models.
SB 53 represents a toned-down version of previous AI safety bills, but it could still force AI companies to publish more information than they do today. For now, they will be watching closely as Senator Wiener once again tests those boundaries.