The author of California’s SB 1047, the country’s most controversial AI safety bill of 2024, is back with a new AI bill that could shake up Silicon Valley.
California Sen. Scott Wiener introduced a new bill on Friday that would protect employees at leading AI labs, allowing them to speak out if they believe their company’s AI systems could pose a “critical risk” to society. The new bill, SB 53, would also create a public cloud computing cluster, called CalCompute, to give researchers and startups the computing resources they need to develop AI that benefits the public.
Wiener’s last AI bill, California’s SB 1047, sparked a lively debate across the country over how to handle large AI systems that could cause disasters. SB 1047 aimed to prevent very large AI models from causing catastrophic events, such as loss of life or cyberattacks resulting in more than $500 million in damages. However, Gov. Gavin Newsom ultimately vetoed the bill in September, saying SB 1047 was not the best approach.
The debate over SB 1047 quickly turned ugly. Some Silicon Valley leaders said SB 1047 would undermine America’s competitiveness in the global AI race, claiming the bill was inspired by unrealistic fears that AI systems could bring about science fiction-like doomsday scenarios. Meanwhile, Sen. Wiener alleged that some venture capitalists engaged in a “propaganda campaign” against his bill, pointing to claims from Y Combinator that experts argued were misleading.
SB 53 essentially takes the less controversial parts of SB 1047, namely the whistleblower protections and the establishment of the CalCompute cluster, and repackages them in a new AI bill.
Notably, Wiener is not shying away from existential AI risk in SB 53. The new bill specifically protects whistleblowers who believe their employers are creating AI systems that pose a “critical risk” to society. The bill defines critical risk as a foreseeable or material risk that the development, storage, or deployment of a covered foundation model will result in the death of, or serious injury to, more than 100 people, or more than $1 billion in damage to money or property.
SB 53 would limit developers of frontier AI models, presumably including OpenAI, Anthropic, and xAI, from retaliating against employees who disclose concerning information to California’s Attorney General, federal authorities, or other employees. Under the bill, these developers would be required to report back to whistleblowers on certain internal processes the whistleblowers find concerning.
As for CalCompute, SB 53 would establish a group to build a public cloud computing cluster. The group would consist of University of California representatives, along with other public and private researchers, and would be tasked with making recommendations on how to build CalCompute, how large the cluster should be, and which users and organizations should have access to it.
Of course, it’s very early in the legislative process for SB 53. The bill will need to be reviewed and passed by California’s legislative bodies before it can become law, and state lawmakers will surely be waiting for Silicon Valley’s reaction.
However, 2025 may be a harder year to pass AI safety bills than 2024. California passed 18 AI-related bills in 2024, but it now appears the AI doom movement has lost ground.
Vice President J.D. Vance signaled at the Paris AI Action Summit that America is not interested in AI safety, but rather in prioritizing AI innovation. While the CalCompute cluster established by SB 53 could certainly be seen as advancing AI progress, it’s unclear how legislative efforts around existential AI risk will fare in 2025.