Fyself News
Startups

Silicon Valley surprises AI safety advocates

By user | October 18, 2025

Silicon Valley leaders, including White House AI and cryptocurrency czar David Sacks and OpenAI chief strategy officer Jason Kwon, sparked controversy online this week with comments about groups promoting AI safety. In separate instances, both argued that some AI safety advocates are not as noble as they appear, acting in their own interests or at the behest of billionaire puppeteers behind the scenes.

AI safety groups who spoke to TechCrunch said the allegations from Sacks and OpenAI are the latest in a string of attempts by Silicon Valley to intimidate its critics. In 2024, some venture capital firms spread rumors that California’s AI safety bill, SB 1047, would send startup founders to prison. The Brookings Institution classified this rumor as one of many pieces of “misinformation” about the bill, but Gov. Gavin Newsom ultimately vetoed it anyway.

Regardless of whether Sacks and OpenAI intended to intimidate critics, their actions were enough to scare some AI safety advocates. Many nonprofit leaders contacted by TechCrunch last week spoke on condition of anonymity to protect their groups from retaliation.

The controversy highlights Silicon Valley’s growing tension between building AI responsibly and building it as a consumer product at scale. This is a topic my colleagues Kirsten Korosec, Anthony Ha, and I explore on this week’s Equity podcast. We also dive into California’s new AI safety law regulating chatbots and OpenAI’s approach to erotica in ChatGPT.

On Tuesday, Sacks wrote a post on X alleging that Anthropic, which has raised concerns about AI’s ability to cause job losses, cyberattacks, and catastrophic damage to society, is simply stirring up fear in order to pass laws that benefit itself and drown smaller startups in red tape. Anthropic is the only major AI lab to support California Senate Bill 53 (SB 53), which established safety reporting requirements for large AI companies and was signed into law last month.

Sacks was responding to a viral essay by Anthropic co-founder Jack Clark about his concerns around AI. Clark delivered the essay as a talk at the Curve AI safety conference in Berkeley a few weeks ago. To those sitting in the audience, it felt like a sincere account of an engineer’s reservations about his own product, but Sacks didn’t see it that way.

Anthropic is implementing a sophisticated regulatory capture strategy based on fear-mongering. The company is largely responsible for the state regulatory frenzy that is damaging the startup ecosystem. https://t.co/C5RuJbVi4P

— David Sacks (@DavidSacks) October 14, 2025

Sacks said Anthropic is implementing a “sophisticated regulatory capture strategy,” though it’s worth noting that a truly sophisticated strategy probably wouldn’t involve antagonizing the federal government. In a follow-up post on X, Sacks noted that Anthropic has “consistently positioned itself as an enemy of the Trump administration.”


Also this week, OpenAI Chief Strategy Officer Jason Kwon explained in a post on X why the company is subpoenaing AI safety nonprofits such as Encode, a nonprofit that advocates for responsible AI policies. (A subpoena is a legal order requesting documents or testimony.) Kwon said that after Elon Musk sued OpenAI over concerns that ChatGPT’s developer had strayed from its nonprofit mission, OpenAI became suspicious of multiple groups speaking out against the reorganization. Encode filed a court brief in support of Musk’s lawsuit, and other nonprofits also publicly spoke out against OpenAI’s reorganization.

There’s more to this story than this.

As everyone knows, we are actively defending a lawsuit in which Elon seeks to harm OpenAI for his own financial gain.

Encode, the organization where @_NathanCalvin serves as general counsel, was one of them… https://t.co/DiBJmEwtE4

— Jason Kwon (@jasonkwon) October 10, 2025

“This raises questions about transparency, including who is funding it and whether there was any coordination,” Kwon said.

NBC News reported this week that OpenAI sent wide-ranging subpoenas to Encode and six other nonprofit groups that have criticized the company, seeking communications related to two of OpenAI’s biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.

One prominent AI safety leader told TechCrunch that there is a growing rift between OpenAI’s government affairs team and its research organization. While OpenAI’s safety researchers frequently publish reports highlighting the risks of AI systems, OpenAI’s policy arm lobbied against SB 53, saying it would prefer uniform rules at the federal level.

In a post on X this week, Joshua Achiam, head of mission alignment at OpenAI, talked about the company’s subpoenas to nonprofits.

“Given the potential risks to my entire career, I would say this: This doesn’t seem great,” Achiam said.

Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a conspiracy led by Musk. However, he argues that this is not the case, and that many in the AI safety community are highly critical of xAI’s safety practices, or lack thereof.

“On OpenAI’s side, this is intended to silence and intimidate critics, and to deter other nonprofits from doing the same,” Steinhauser said. “I think for Sacks, the concern is that [the AI safety] movement is growing and people want to hold these companies accountable.”

White House senior AI policy adviser and former a16z general partner Sriram Krishnan weighed in on the conversation in his own social media posts this week, criticizing AI safety advocates as out of touch. He urged AI safety organizations to talk to “real-world people who are using, selling, and deploying AI in their homes and organizations.”

A recent Pew survey found that about half of Americans are more concerned than excited about AI, though it’s unclear what exactly worries them. Another recent study looked more closely and found that U.S. voters care more about job losses and deepfakes than about the catastrophic risks posed by AI, which are the primary focus of the AI safety movement.

Addressing these safety concerns could come at the expense of the AI industry’s rapid growth, a trade-off that worries many in Silicon Valley. Concerns about overregulation are understandable, as AI investment underpins much of the U.S. economy.

But after years of unregulated AI advancement, the AI safety movement appears to be gaining serious momentum heading into 2026. Silicon Valley’s attempts to fight back against safety-minded groups may be a sign that those groups are having an effect.

