From the moment OpenAI CEO Sam Altman stepped onto the stage, it was clear that this would not be a normal interview.
Altman and his chief operating officer, Brad Lightcap, stood awkwardly toward the back of the stage at a jam-packed San Francisco venue where jazz concerts are usually held. Hundreds of people filled the steep theater-style seating on Tuesday night for a live taping of Hard Fork, the popular technology podcast hosted by New York Times columnist Kevin Roose and Platformer's Casey Newton.
Altman and Lightcap were the main event, but they had walked out too early. Roose explained that he and Newton had planned to open the show by reading some of the headlines written about OpenAI in the weeks leading up to the event, before the OpenAI executives came out.
“This is even more fun with us out here for it,” Altman said. Seconds later, OpenAI’s CEO asked, “Are you going to talk about where you sue us because you don’t like user privacy?”
Within minutes of the show starting, Altman hijacked the conversation to talk about The New York Times’ lawsuit against OpenAI and its largest investor, Microsoft, in which the publisher alleges that Altman’s company improperly used its articles to train large language models. Altman seemed particularly peeved about a recent development in the case, in which lawyers representing The New York Times asked OpenAI to retain consumer ChatGPT and API customer data.
“The New York Times, one of the great institutions, has for a really long time taken the position that we need to keep our users’ logs even if they’re chatting in private mode or have asked us to delete them,” Altman said. “I still love The New York Times, but that’s one we feel strongly about.”
For a few minutes, OpenAI’s CEO pressed the podcasters to share their personal opinions about the New York Times lawsuit.
Altman and Lightcap’s brash entrance lasted only a few minutes, and the rest of the interview seemed to go as planned. Still, the flare-up felt like an inflection point in the tech industry’s relationship with the media.
Over the past few years, several publishers have filed lawsuits against OpenAI, Anthropic, Google, and Meta for training AI models on copyrighted works. At a high level, these lawsuits argue that AI models could devalue, and even replace, the copyrighted works produced by media institutions.
However, the tide may be turning in favor of the tech companies. Earlier this week, OpenAI’s competitor Anthropic won a major victory in its legal battle with publishers: a federal judge ruled that Anthropic’s use of books to train its AI models was legal in some circumstances, a decision that could have broad implications for other publishers’ lawsuits against OpenAI, Google, and Meta.
Perhaps Altman and Lightcap felt emboldened by the industry’s win as they headed into a live interview with journalists from The New York Times. But these days, OpenAI is fending off threats from every direction, and that became clear over the course of the night.
Mark Zuckerberg has recently been trying to recruit top OpenAI talent, reportedly offering $100 million compensation packages to join Meta’s AI superintelligence lab.
Asked whether Meta’s CEO genuinely believes superintelligent AI systems are close, or whether the pitch is just a recruiting strategy, Lightcap said, “I think [Zuckerberg] believes it is close.”
Roose then asked Altman about OpenAI’s relationship with Microsoft, which has reportedly been pushed to a boiling point in recent months as the two companies negotiate a new contract. Microsoft was once a major accelerant of OpenAI’s growth, but the two now compete in enterprise software and other domains.
“Every deep partnership has points of tension, and we certainly have them,” Altman said. “We’re both ambitious companies, so we do find some flashpoints, but I think it’s something we’ll find deep value in, for both sides, for a very long time.”
OpenAI’s leadership today seems to be spending a lot of time fending off competitors and lawsuits. That could hinder OpenAI’s ability to tackle broader problems around AI, such as how to safely deploy highly intelligent AI systems at scale.
At one point, Newton asked OpenAI’s leaders how they think about recent stories of mentally unstable people using ChatGPT to go down dangerous rabbit holes, including discussing conspiracy theories or suicide with the chatbot.
Altman said OpenAI takes a number of steps to prevent these conversations, such as cutting them off early or directing users to professional services that can help.
“We don’t want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough,” Altman said. To a follow-up question, OpenAI’s CEO added, “However, for users who are in a fragile enough mental place, on the edge of a psychotic break, we haven’t yet figured out how a warning gets through.”