Startups

The Fixer’s Dilemma: Chris Lehane and OpenAI’s Impossible Mission

By user · October 11, 2025 · 8 Mins Read

Chris Lehane is one of the best in the business at making bad news disappear. Lehane, Al Gore’s press secretary during the Clinton era and the crisis manager who steered Airbnb through every regulatory nightmare from here to Brussels, knows how to spin. For the past two years he has held what may be the most impossible job of all: as OpenAI’s vice president of global policy, he has to convince the world that OpenAI is serious about democratizing artificial intelligence, even as the company increasingly behaves like the other tech giants that have long claimed to be different.

I spent 20 minutes with him on stage at the Elevate conference in Toronto earlier this week, which was enough time to get past the talking points and into the real contradictions undermining OpenAI’s carefully constructed image. It wasn’t easy, and it wasn’t entirely successful either. Lehane is really good at his job. He’s likable. He sounds reasonable. He acknowledges uncertainty. He even talks about waking up at 3 a.m. wondering whether any of this will actually benefit humanity.

But good intentions don’t mean much when your company subpoenas its critics, drains water and electricity from economically depressed towns, and brings dead celebrities back to life to assert market dominance.

The company’s Sora problem is at the root of it all. The video generation tool launched last week with copyrighted material seemingly baked in. That was a bold move for a company already being sued by the New York Times, the Toronto Star, and half of the publishing industry. It was also great from a business and marketing perspective: OpenAI CEO Sam Altman said the invite-only app has rocketed to the top of the App Store as people create digital versions of themselves, of characters such as Pikachu, Mario, and Cartman from “South Park,” and of dead celebrities like Tupac Shakur.

When asked what motivated OpenAI’s decision to release this latest version of Sora with those characters, Lehane gave the standard pitch: Sora is a “general purpose technology,” like electricity or the printing press, that democratizes creativity for people without talent or resources. Even he, a self-described creative zero, can now make videos, he said on stage.

It’s a tidy framing, but it dances around the fact that OpenAI originally “allowed” rights holders to opt out of having their work used to train Sora, which is not how copyright typically works, and then “evolved” toward an opt-in model once it noticed how much people liked using copyrighted images. That isn’t iterating; it’s testing how much you can get away with. (The Motion Picture Association, by the way, made noises last week about legal threats, but so far OpenAI seems to have gotten away with quite a lot.)

Unsurprisingly, the situation is a frustrating reminder for publishers who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers being shut out of the economics, he brought up fair use, the American legal doctrine that balances creators’ rights with the public’s access to knowledge. He called it the secret weapon of America’s technological superiority.

Perhaps. But I had just recently interviewed Lehane’s old boss, Al Gore, and realized that instead of reading my article on TechCrunch, anyone could simply ask ChatGPT about it. “It’s ‘iterative,’” I said, “but it’s also displacement.”

For the first time, Lehane dropped the spiel. “We’re all going to need to work through this,” he said. “It’s really glib and easy to sit here on stage and say we need to come up with a new economic revenue model. But I think we will.” (In short: we’re making it up as we go.)

Then there are the infrastructure questions no one wants to answer honestly. OpenAI already operates a data center campus in Abilene, Texas, and recently broke ground on a large data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane likens access to AI to the advent of electricity, saying those who got it last are still catching up, yet OpenAI’s Stargate project appears to be targeting some similarly economically challenged regions as sites for its water- and electricity-hungry facilities.

When asked whether these communities would benefit or just foot the bill, Lehane talked about gigawatts and geopolitics. He noted that OpenAI needs about a gigawatt of energy per week, while China brought online 450 gigawatts and 33 nuclear facilities last year. If democracies want democratic AI, he argued, they have to compete. “The optimist in me says this will modernize our energy system,” he said, painting a picture of a re-industrialized America with a transformed power grid.

It was stirring. But it didn’t answer whether people in Lordstown and Abilene will see their utility bills skyrocket while OpenAI generates videos of John F. Kennedy and The Notorious B.I.G. (video generation is the most energy-intensive form of AI out there).

So I brought up the most uncomfortable example I could think of. The day before our interview, Zelda Williams had begged strangers on Instagram to stop sending her AI-generated videos of her late father, Robin Williams. “You are not making art,” she wrote. “You’re using human lives to make disgusting, over-processed hot dogs.”

When I asked how the company reconciles its mission with this kind of intimate harm, Lehane answered by talking about process: responsible design, testing frameworks, and government partnerships. “There’s no playbook for this, right?”

Lehane showed vulnerability at times, saying he wakes up at 3 a.m. every night worrying about democracy, geopolitics, and infrastructure. “This comes with a lot of responsibility.”

Whether or not that moment was staged for the audience, I took him at his word. In fact, I left Toronto thinking I had seen a masterclass in political messaging: for all I know, Lehane was threading an impossible needle while dodging questions about company decisions he may not even agree with. Then Friday happened.

Nathan Calvin, an AI policy lawyer at the nonprofit advocacy group Encode AI, revealed that around the same time I was speaking with Lehane in Toronto, OpenAI had sent sheriff’s deputies to Calvin’s home in Washington, D.C., to serve him a subpoena over dinner. It wanted his personal messages with California state legislators, college students, and former OpenAI employees.

Calvin accused OpenAI of scare tactics around California’s new AI regulation, SB 53. He said the company was using its legal battle with Elon Musk as a pretext to target its critics, insinuating that Encode is secretly funded by Musk. Calvin had in fact fought OpenAI’s opposition to SB 53, California’s AI safety bill, and said he “literally laughed out loud” when he saw the company claim it had “worked to improve the bill.” In a social media thread, he went on to call Lehane, specifically, the “master of the political dark arts.”

In Washington, that might be a compliment. For a company like OpenAI, whose mission is to “build AI that benefits all humanity,” this sounds like an indictment.

More tellingly, even OpenAI’s own employees are conflicted about what the company is becoming.

As my colleague Max reported last week, after the release of Sora 2, a number of current and former employees expressed their concerns on social media, including OpenAI researcher and Harvard professor Boaz Barak, who wrote that Sora 2 is “technically impressive, but it’s too early to praise it for avoiding the pitfalls of other social media apps and deepfakes.”

On Friday, Josh Achiam, OpenAI’s head of mission alignment, tweeted something even more notable in response to Calvin’s accusations. Acknowledging that it was “probably a risk to my entire career,” Achiam wrote of OpenAI: “We cannot afford to do anything that makes us a fearful power rather than a benevolent power. We have a duty and a mission to all humanity, and the bar to pursue that duty is very high.”

That is . . . something. An OpenAI executive publicly wondering whether his company is becoming a fearful power rather than a benevolent one is not a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is acknowledging a crisis of conscience despite the career risk.

It is a crystallizing moment. You can be the tech industry’s best political operative, a master at navigating impossible situations, and still end up working for a company whose actions increasingly contradict its stated values. And that contradiction is only likely to sharpen as OpenAI races toward artificial general intelligence.

I think the real question isn’t whether Chris Lehane can sell OpenAI’s mission. What matters is whether other people (including, critically, others who work there) still believe it.

