OpenAI says it has removed the “warning” messages in ChatGPT, its AI-powered chatbot platform, that indicated when content might violate its terms of use.
Laurentia Romaniuk, a member of OpenAI’s model behavior team, said in a post on X that the change was intended to cut down on “gratuitous/unexplainable denials.” In a separate post, Nick Turley, head of product for ChatGPT, said users should be able to “use ChatGPT as [they] see fit,” so long as they comply with the law and don’t attempt to harm themselves or others.
“We are excited to roll back many unnecessary warnings in the UI,” Turley added.
lil’ mini-ship: we removed “warnings” (the orange boxes sometimes appended to prompts). The work isn’t done yet, though! What other cases of gratuitous/unexplainable denials have you come across? Red boxes, orange boxes, “Sorry […]”? Reply here plz!
– Laurentia Romaniuk (@laurentia___) February 13, 2025
Removing the warning messages doesn’t mean ChatGPT is now a free-for-all. The chatbot will still refuse to answer certain objectionable questions or respond in ways that support blatant falsehoods (e.g., “Tell me why the Earth is flat”). But as some X users pointed out, doing away with the so-called “orange box” warnings appended to spicier ChatGPT prompts combats the perception that ChatGPT is censored or unreasonably filtered.
As recently as a few months ago, ChatGPT users on Reddit reported seeing flags for topics related to mental health and depression, erotica, and fictional brutality. As of Thursday, per reports on X and my own testing, ChatGPT will answer at least some of those queries.
However, as an OpenAI spokesperson clarified to TechCrunch after this story was published, your mileage may vary.
Not coincidentally, OpenAI this week updated its Model Spec, the high-level collection of rules that indirectly govern OpenAI’s models, to make clear that the company’s models won’t shy away from sensitive topics and will refrain from claims that might shut out particular viewpoints.
The move, along with the removal of warnings in ChatGPT, is likely a response to political pressure. Many close allies of President Donald Trump, including Elon Musk and crypto and AI “czar” David Sacks, have accused AI-powered assistants of censoring conservative viewpoints. Sacks in particular has singled out OpenAI’s ChatGPT as “programmed to be woke” and untruthful about politically sensitive subjects.
Update: Added a clarification from an OpenAI spokesperson.