Anthropic’s Super Bowl commercial, one of four ads the AI lab dropped on Wednesday, opens with the word “BETRAYAL” splashed boldly across the screen. The camera then cuts to a man eagerly seeking advice from a chatbot (obviously meant to evoke ChatGPT) on how to talk to his mother.
The bot, played by a blonde woman, offers some classic advice: Start by listening. Try a nature walk! Then it pivots into a pitch for a fictional (I hope!) cougar dating site called Golden Encounters. Anthropic closes the spot by noting that AI is getting ads, but its own chatbot, Claude, won’t.
Another commercial features a short young man seeking advice on building a six-pack. After he enters his height, age, and weight, the bot serves him an ad for height-increasing insoles.
The Anthropic commercials cleverly target OpenAI users following that company’s recent announcement that ads will be added to ChatGPT’s free tier. They caused an immediate uproar, with headlines about Anthropic “mocking,” “skewering,” and “dunking on” OpenAI.
They’re funny enough that even Sam Altman admitted on X to laughing at them. But he clearly wasn’t all that amused: the ads inspired him to write a novella-length rant that escalated to calling his rival “dishonest” and “authoritarian.”
In that post, Altman explains that the ad-supported tier is intended to shoulder the cost of providing ChatGPT free to its millions of users. ChatGPT remains the most popular chatbot by a wide margin.
The OpenAI CEO called the ads “disingenuous,” however, for implying that ChatGPT would distort conversations to insert ads (perhaps even ones for off-color products). “We clearly do not intend to advertise in the manner that Anthropic portrays,” Altman wrote. “We’re not stupid and we know users will reject it.”
In fact, OpenAI promises that ads will be kept separate, labeled, and will have no impact on chat answers. But the company also says it plans to make ads conversationally relevant, which is the central conceit of Anthropic’s spots. As OpenAI explained in its blog: “We plan to test ads at the bottom of ChatGPT answers when there is a relevant sponsored product or service based on the current conversation.”
Altman then made similarly dubious claims about his rival. “Anthropic provides expensive products to the wealthy,” he wrote. “We also feel strongly that we need to bring AI to the billions of people who can’t afford a subscription.”
However, Claude also has a free tier, with subscription plans at $0, $17, $100, and $200. ChatGPT’s tiers are $0, $8, $20, and $200. Some might argue that those lineups are pretty comparable.
Altman also claimed in his post that “Anthropic wants to control what people do with AI.” He pointed to Anthropic blocking “companies we don’t like,” such as OpenAI, from using Claude Code, and said Anthropic tells people what they can and cannot do with AI.
Indeed, Anthropic’s marketing pitch has been “responsible AI” since day one. After all, the company was founded by former OpenAI employees who say they grew concerned about AI safety while working there.
Still, both companies have usage policies and AI guardrails in place. And while OpenAI allows ChatGPT to be used for erotica and Anthropic does not, OpenAI, like Anthropic, has decided that some content, particularly around mental health, should be blocked.
But Altman took this argument about telling people what to do to an extreme, accusing Anthropic of being “authoritarian.”
“One authoritarian company cannot get us there alone, not to mention other obvious risks. This is a dark path,” he wrote.
Deploying “authoritarian” in a rant against a cheeky Super Bowl ad is misguided at best. It’s especially tone-deaf given the current geopolitical environment, in which protesters around the world are being killed by agents of their own governments. Business rivals have taken shots at each other in advertising since time immemorial, but Anthropic has clearly struck a nerve.
