Silicon Valley leaders, including White House AI and cryptocurrency czar David Sacks and OpenAI chief strategy officer Jason Kwon, sparked controversy online this week with comments about groups promoting AI safety. In separate instances, they argued that some AI safety advocates are not as noble as they appear, acting in their own interests or those of billionaire puppet masters behind the scenes.
AI safety groups that spoke to TechCrunch said the allegations from Sacks and OpenAI are the latest in Silicon Valley's attempts to intimidate its critics, and far from the first. In 2024, some venture capital firms spread rumors that California's AI safety bill, SB 1047, would send startup founders to prison. The Brookings Institution labeled the rumor one of many pieces of misinformation about the bill, but Gov. Gavin Newsom ultimately vetoed it anyway.
Regardless of whether Sacks and OpenAI intended to intimidate critics, their actions have been enough to scare some AI safety advocates. Many nonprofit leaders contacted by TechCrunch last week spoke on condition of anonymity to shield their groups from retaliation.
The controversy highlights Silicon Valley's growing tension between building AI responsibly and building it as a mass-market consumer product. It's a topic my colleagues Kirsten Korosec, Anthony Ha, and I explore on this week's Equity podcast. We also dig into California's new AI safety law regulating chatbots and OpenAI's approach to erotica in ChatGPT.
On Tuesday, Sacks wrote a post on X alleging that Anthropic, which has raised concerns about AI's potential to cause job losses, cyberattacks, and catastrophic harm to society, is simply stoking fear in order to pass laws that benefit itself and bury smaller startups in red tape. Anthropic is the only major AI lab to support California Senate Bill 53 (SB 53), which established safety reporting requirements for large AI companies and was signed into law last month.
Sacks was responding to a viral essay by Anthropic co-founder Jack Clark about his concerns around AI. Clark delivered the essay as a talk at the Curve AI safety conference in Berkeley a few weeks ago. From the audience, it certainly felt like a candid account of a technologist's reservations about his own products, but Sacks didn't see it that way.
Sacks said Anthropic is pursuing a "sophisticated regulatory strategy," though it's worth noting that a truly sophisticated strategy probably wouldn't involve antagonizing the federal government. In a follow-up post on X, Sacks noted that Anthropic has "consistently positioned itself as an enemy of the Trump administration."
Also this week, OpenAI chief strategy officer Jason Kwon explained in a post on X why the company is subpoenaing AI safety nonprofits such as Encode, which advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI over concerns that the ChatGPT maker had strayed from its nonprofit mission, OpenAI grew suspicious of the multiple groups that also spoke out against its reorganization. Encode filed a court brief in support of Musk's lawsuit, and other nonprofits publicly criticized OpenAI's reorganization as well.
“This raises questions about transparency, including who is funding it and whether there was any coordination,” Kwon said.
NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofits that have criticized the company, seeking communications related to two of OpenAI's biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI's subpoena also asked Encode for communications related to its support of SB 53.
One prominent AI safety leader told TechCrunch that there is a growing rift between OpenAI's government affairs team and its research organization. While OpenAI's safety researchers frequently publish reports detailing the risks of AI systems, its policy arm lobbied against SB 53, saying it would prefer uniform rules at the federal level.
In a post on X this week, Joshua Achiam, OpenAI's head of mission alignment, weighed in on the company's subpoenas to nonprofits.
"At what is possibly a risk to my whole career, I will say: this doesn't seem great," Achiam wrote.
Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a conspiracy led by Musk. But he argues that isn't the case, noting that much of the AI safety community is quite critical of xAI's safety practices, or lack thereof.
"On OpenAI's part, this is meant to silence and intimidate critics, and to deter other nonprofits from doing the same," Steinhauser said. "I think for Sacks, the concern is that [the AI safety] movement is growing and people want to hold these companies accountable."
White House senior AI policy adviser and former a16z general partner Sriram Krishnan joined the conversation with his own social media post this week, criticizing AI safety advocates as out of touch. He urged AI safety organizations to talk to "real-world people who are using, selling, and deploying AI in their homes and organizations."
A recent Pew survey found that about half of Americans are more concerned than excited about AI, but it’s unclear what exactly they’re worried about. Another recent study looked more closely and found that U.S. voters care more about job losses and deepfakes than about the catastrophic risks posed by AI (which is the primary focus of the AI safety movement).
Addressing these safety concerns could come at the expense of the AI industry's rapid growth, a trade-off that worries many in Silicon Valley. Fears of overregulation are understandable, given that AI investment underpins much of the U.S. economy.
But after years of unregulated AI advancements, the AI safety movement appears to be gaining serious momentum heading into 2026. Silicon Valley’s attempts to fight back against safety-minded groups may be a sign that they’re having an effect.