As Sen. Ted Cruz (R-TX) and other lawmakers work to ensure its inclusion in the GOP megabill ahead of the July 4 deadline, a federal proposal that would ban states and local governments from regulating AI for 10 years could soon be signed into law.
Proponents, including OpenAI’s Sam Altman, Anduril’s Palmer Luckey, and a16z’s Marc Andreessen, have argued that a “patchwork” of AI regulation among states would curb American innovation at a moment when the competition to beat China is escalating.
Critics include most Democrats, a handful of Republicans, Anthropic CEO Dario Amodei, labor groups, AI safety nonprofits, and consumer rights advocates. They warn that the provision would block states from passing laws that protect consumers from AI harms, effectively allowing powerful AI companies to operate with little oversight or accountability.
The so-called “AI moratorium” provision was slipped into the budget reconciliation bill, nicknamed the “Big Beautiful Bill,” in May. It would bar states from “[enforcing] any law or regulation regulating [AI] models, [AI] systems, or automated decision systems” for 10 years.
Such a measure could preempt state AI laws that have already passed, such as California’s AB 2013, which requires companies to disclose the data used to train AI systems.
The moratorium’s reach extends far beyond these examples. Public Citizen has compiled a database of AI-related laws that could be affected by the moratorium. The database reveals that many states have passed laws that overlap with one another, which could actually make it easier for AI companies to navigate the “patchwork.” For example, Alabama, Arizona, California, Delaware, Hawaii, Indiana, Montana, and Texas have criminalized or created civil liability for distributing deceptive AI-generated media meant to influence elections.
The AI moratorium also threatens several notable AI safety bills awaiting signature, including New York’s RAISE Act.
Getting a moratorium into a budget bill requires creative maneuvering. Because provisions in a budget reconciliation bill must have a direct fiscal impact, Cruz revised the proposal in June to make compliance with the AI moratorium a condition for states to receive funds from the $42 billion Broadband Equity, Access, and Deployment (BEAD) program.
Cruz then released another revision on Wednesday, which he says ties the requirement only to the new $500 million in BEAD funding included in the bill, an additional pot of money. However, a close reading of the revised text shows the language could also threaten to pull already-obligated broadband funds from states that don’t comply.
Sen. Maria Cantwell (D-WA) criticized Cruz’s reconciliation language on Thursday, saying it would force states receiving BEAD funds to choose between “expanding broadband or protecting consumers from AI.”
What’s next?

The provision is currently in limbo. Cruz’s initial revision passed a procedural review earlier this week, which would clear the way for the AI moratorium to be included in the final bill. However, reporting today from Punchbowl News and Bloomberg suggests that talks have reopened, and conversations around the AI moratorium’s language are ongoing.
Sources familiar with the matter tell TechCrunch that the Senate is expected to begin debate this week on amendments to the budget, including one that would strike the AI moratorium. That will be followed by a rapid series of votes on the full slate of amendments.
Politico reported Friday that the Senate is set to hold its first vote on the megabill Saturday.
In a LinkedIn post, OpenAI’s chief global affairs officer, Chris Lehane, said, “The current patchwork approach to regulating AI is not working and will continue to get worse if we stay on this path.” He said this has “serious implications” for the U.S. as it races against China to establish AI dominance.
“I’m not one to quote Vladimir Putin, but he has said that whoever wins this race will determine the direction of the world going forward,” Lehane wrote.
OpenAI CEO Sam Altman shared similar sentiments this week during a live recording of the tech podcast Hard Fork. He said he believes some adaptive regulation addressing the biggest existential risks of AI would be good, but that “a patchwork across the states would probably be a real mess and very difficult to offer services under.”
Altman also questioned whether policymakers are equipped to regulate AI when the technology is moving so quickly.
“I worry that if we kick off a three-year process to write something that covers a lot of cases … the technology will just move very quickly,” he said.
But a closer look at existing state laws tells another story. Most state AI laws on the books today are not far-reaching. They focus on protecting consumers and individuals from specific harms, including deepfakes, fraud, discrimination, and privacy violations. They target AI use in contexts such as hiring, housing, credit, healthcare, and elections, and include disclosure requirements and safeguards against algorithmic bias.
TechCrunch asked Lehane and other members of the OpenAI team if they could name any current state laws that have hampered the company’s ability to advance its technology and release new models. We also asked why navigating different state laws would be considered too complex, given OpenAI’s progress on technology it says could automate a wide range of white-collar jobs over the next few years.
TechCrunch asked similar questions of Meta, Google, Amazon, and Apple, but has not received any answers.
The case against preemption

“The patchwork argument is something we’ve heard since the beginning of consumer advocacy,” Emily Peterson-Cassin, corporate power director at the internet activist group Demand Progress, told TechCrunch. “But the reality is that companies comply with different state regulations all the time. Can the most powerful companies in the world do so? Yes. Yes, they can.”
Opponents and cynics alike say the AI moratorium isn’t about innovation; it’s about sidestepping oversight. While many states have managed to pass regulations around AI, Congress, which moves notoriously slowly, has passed no laws regulating AI at all.
“If the federal government wants to pass strong AI safety legislation and then preempt the states’ ability to do so, I’d be the first person to be very excited about that,” Nathan Calvin, vice president of state affairs at the nonprofit Encode, said in an interview. “Instead, [the AI moratorium] takes away all of our leverage and any ability to force AI companies to come to the negotiating table.”
One of the loudest critics of the proposal is Anthropic CEO Dario Amodei. In an opinion piece for The New York Times, Amodei said a “10-year moratorium is far too blunt an instrument.”
“AI is advancing too fast,” he wrote. “I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds.”
He argued that instead of prescribing how companies should release their products, the federal government should work with AI companies to develop a transparency standard for how companies share information about their practices and their models’ capabilities.
The opposition isn’t limited to Democrats. Despite being authored by prominent Republicans like Cruz and Rep. Jay Obernolte, the AI moratorium has drawn notable Republican opposition from those who argue the provision tramples on the GOP’s traditional support for states’ rights.
Those Republican critics include Sen. Josh Hawley (R-MO), who is concerned about states’ rights and is working with Democrats to strip the provision from the bill. Sen. Marsha Blackburn (R-TN) has also criticized the provision, arguing that states need to be able to protect their citizens and the creative industries from AI harms. Rep. Marjorie Taylor Greene (R-GA) has even said she would oppose the entire budget if the moratorium stays in.
What do Americans want?
Republicans like Cruz and Senate Majority Leader John Thune say they want a “light touch” approach to AI governance. Cruz has also said in a statement that all Americans deserve a voice in shaping AI’s future.
However, a recent Pew Research survey found that most Americans seem to want more regulation around AI, not less. The survey found that about 60% of U.S. adults and 56% of AI experts say they’re more concerned that the U.S. government won’t go far enough in regulating AI than that it will go too far. Americans are also largely skeptical of industry efforts around responsible AI, and have little confidence in the government’s ability to regulate AI effectively.
This article has been updated to reflect newer reporting on the Senate’s timeline for voting on the bill.