As Sen. Ted Cruz (R-TX) and other lawmakers work to ensure its inclusion in the GOP megabill, a federal proposal that would ban states and local governments from regulating AI for five years could soon be signed into law.
Proponents, including OpenAI's Sam Altman, Anduril's Palmer Luckey, and a16z's Marc Andreessen, argue that a "patchwork" of AI regulation among the states would stifle American innovation at a moment when competition with China is intensifying.
Critics include most Democrats, many Republicans, Anthropic CEO Dario Amodei, labor groups, AI safety nonprofits, and consumer rights advocates. They warn that the provision would block states from passing laws that protect consumers from AI harms, effectively allowing powerful AI companies to operate with little oversight or accountability.
On Friday, a group of 17 Republican governors wrote to Senate Majority Leader John Thune, who has advocated a "light-touch" approach to AI regulation, and House Speaker Mike Johnson, calling for the so-called "AI moratorium" to be stripped from the budget reconciliation bill, per Axios.
The provision was tucked into the bill, nicknamed the "One Big Beautiful Bill," in May. It initially banned states from "[enforcing] any law or regulation regulating [AI] models, [AI] systems, or automated decision systems" for ten years.
But over the weekend, Cruz and Sen. Marsha Blackburn (R-TN), who had criticized the provision, agreed to shorten the pause on state-based AI regulation to five years. The new language also seeks to exempt laws that address child sexual abuse material, children's online safety, and individuals' rights to their own names, likenesses, voices, and images. However, the amendment states that such laws must not place an "undue or disproportionate burden" on AI systems, and legal experts are unsure how that caveat will affect state laws.
Such a measure could preempt AI laws that have already passed, like California's AB 2013, which requires companies to disclose the data used to train their AI systems.
But the moratorium's reach would go far beyond those examples. Public Citizen has compiled a database of AI-related laws that could be affected by the moratorium. The database reveals that many states have passed laws that overlap with one another, which could actually make it easier for AI companies to navigate the "patchwork." For example, Alabama, Arizona, California, Delaware, Hawaii, Indiana, Montana, and Texas have each criminalized or created civil liability for distributing deceptive AI-generated media intended to influence elections.
The AI moratorium also threatens several notable AI safety bills awaiting signature, including New York's RAISE Act.
Fitting the moratorium into a budget bill has required some creative maneuvering. Because provisions in a budget bill must have a direct fiscal impact, Cruz revised the proposal in June to make compliance with the AI moratorium a condition for states to receive funds from the $42 billion Broadband Equity Access and Deployment (BEAD) program.
Cruz released another revision last week, which he says ties the requirement only to the new $500 million in BEAD funding included in the bill. However, a close reading of the revised text shows the language could also threaten to pull already-obligated broadband funds from states that don't comply.
Sen. Maria Cantwell (D-WA) previously criticized Cruz's reconciliation language, saying it would force states receiving BEAD funding to choose between expanding broadband and protecting consumers from AI harms.
What’s next?

As of Monday, the Senate was engaged in a vote-a-rama, a marathon of rapid-fire votes on proposed amendments to the budget bill. The new language agreed to by Cruz and Blackburn is expected to be folded into a broader amendment that Republicans will likely pass along party lines. Senators are also likely to vote on a Democrat-backed amendment to strip the provision entirely, a source familiar with the matter told TechCrunch.
OpenAI's chief global affairs officer, Chris Lehane, said in a LinkedIn post, "The current patchwork approach to regulating AI isn't working and will continue to get worse if we stay on this path." He said this has "serious implications" for the US as it races against China for AI dominance.
"While I'm not someone who tends to quote Vladimir Putin, he has said that whoever wins will determine the direction of the world going forward," Lehane wrote.
OpenAI CEO Sam Altman shared similar sentiments last week during a live recording of the tech podcast Hard Fork. He said that while some adaptive regulation addressing AI's biggest existential risks would be good, "a patchwork across the states would probably be a real mess and very difficult to offer services under."
Altman also questioned whether policymakers are equipped to handle regulating AI when the technology moves so quickly.
"I worry that if we start a three-year process to write something that's very detailed and covers a lot of cases … the technology will just move very quickly," he said.
But a close look at existing state laws tells a different story. Most state AI laws on the books today are not far-reaching; they focus on protecting consumers and individuals from specific harms, like deepfakes, fraud, discrimination, and privacy violations. They target AI use in contexts like hiring, housing, credit, healthcare, and elections, and include disclosure requirements and algorithmic-bias safeguards.
TechCrunch asked Lehane and other members of the OpenAI team whether they could name any current state laws that have hampered the company's ability to advance its technology and release new models. TechCrunch also asked why navigating different state laws would be considered too complex, given OpenAI's progress on technology that could automate a wide range of white-collar jobs in the coming years.
TechCrunch put similar questions to Meta, Google, Amazon, and Apple, but has not received any answers.
The case against preemption

"The patchwork argument is something that we have heard since the beginning of time in consumer advocacy," Emily Peterson-Cassin, corporate power director at the internet activist group Demand Progress, told TechCrunch. "But in reality, companies comply with different state regulations all the time. Can the most powerful companies in the world do it? Yes, of course they can."
Opponents and cynics alike say the AI moratorium isn't about innovation; it's about sidestepping oversight. While many states have moved to pass AI regulations, Congress, which moves notoriously slowly, has passed no laws regulating AI.
"If the federal government wants to pass strong AI safety legislation and then preempt the states' ability to do that, I'd be the first to be very excited about that," Nathan Calvin, vice president of state affairs at the nonprofit Encode, said in an interview. "Instead, [the AI moratorium] takes away all leverage, and any ability, to force AI companies to come to the negotiating table."
One of the loudest critics of the proposal is Anthropic CEO Dario Amodei. In an opinion piece in The New York Times, Amodei said a 10-year moratorium is "far too blunt an instrument."
"AI is advancing too head-spinningly fast," he wrote. "I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds."
Instead of prescribing how companies should release their products, he argued, the federal government should work with AI companies to develop a transparency standard for how companies share information about their practices and model capabilities.
The opposition isn't limited to Democrats. Although the moratorium was championed by prominent Republicans like Cruz and Rep. Jay Obernolte, it has drawn notable Republican opposition from those who argue the provision tramples on the GOP's traditional support for states' rights.
Those Republican critics include Sen. Josh Hawley (R-MO), who is concerned about states' rights and has been working with Democrats to strip the provision from the bill. Blackburn has also criticized the provision, arguing that states need to be able to protect their citizens and the creative industry from AI harms. Rep. Marjorie Taylor Greene (R-GA) has even said she would oppose the entire budget bill if the moratorium stays in.
What do Americans want?
Republicans like Cruz and Senate Majority Leader John Thune say they want a "light touch" approach to AI governance. Cruz has also said in a statement that all Americans deserve a voice in shaping AI's future.
However, a recent Pew Research survey suggests that most Americans want more regulation around AI. It found that about 60% of US adults and 56% of AI experts say they're more concerned that the US government won't go far enough in regulating AI than that it will go too far. Americans are also largely skeptical of industry efforts around responsible AI, and have little confidence that the government will regulate AI effectively.
This article was updated on June 30 to reflect amendments to the bill, new reporting on the Senate vote, and fresh Republican opposition to the AI moratorium.