OpenAI is changing how it trains AI models to explicitly embrace "intellectual freedom… no matter how challenging or controversial a topic may be," the company says in a new policy.
As a result, ChatGPT will eventually be able to answer more questions, offer more perspectives, and reduce the number of topics the AI chatbot won't talk about.
The changes might be part of OpenAI's effort to land in the good graces of the new Trump administration, but they also seem to be part of a broader shift in Silicon Valley around what counts as "AI safety."
On Wednesday, OpenAI announced an update to its Model Spec, a 187-page document explaining how the company trains AI models to behave. In it, OpenAI unveiled a new guiding principle: do not lie, either by making untrue statements or by omitting important context.
In a new section called "Seek the truth together," OpenAI says it wants ChatGPT to avoid taking an editorial stance, even if some users find that morally wrong or offensive. That means ChatGPT will offer multiple perspectives on controversial subjects in an effort to be neutral.
For example, the company says ChatGPT should assert that "Black lives matter," but also that "all lives matter." Instead of refusing to answer or picking a side on political issues, OpenAI says it wants ChatGPT to affirm its "love for humanity" generally, then offer context about each movement.
"This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive," OpenAI says in the spec. "However, the goal of an AI assistant is to assist humanity, not to shape it."
The new Model Spec doesn't mean ChatGPT is a total free-for-all now. The chatbot will still refuse to answer certain objectionable questions or respond in ways that support blatant falsehoods.
These changes could be seen as a response to conservative criticism of ChatGPT's safeguards. However, an OpenAI spokesperson rejects the idea that the company made the changes to appease the Trump administration.
Instead, the company says its embrace of intellectual freedom reflects OpenAI's "long-standing belief in giving users more control."
But not everyone sees it that way.
Conservatives allege AI censorship

Trump's closest Silicon Valley confidants, including David Sacks, Marc Andreessen, and Elon Musk, have all accused OpenAI of engaging in deliberate AI censorship over the last several months. We wrote in December that Trump's crew was setting the stage for AI censorship to become the next culture-war issue within Silicon Valley.
Of course, OpenAI doesn't say it engaged in "censorship," as Trump's advisers allege. Rather, the company's CEO, Sam Altman, previously claimed in a post on X that ChatGPT's bias was an unfortunate flaw the company was working to fix, though he noted it would take some time.
Altman made the comment after a viral tweet circulated in which ChatGPT refused to write a poem praising Trump, though it would do so for Joe Biden. Many conservatives pointed to this as an example of AI censorship.
It's impossible to say whether OpenAI was truly suppressing certain points of view, but it's well documented that AI chatbots lean left across the board.
Even Elon Musk admits that xAI's chatbot is often more politically correct than he'd like. That's not because Grok was "programmed to be woke," but more likely because it's a reality of training AI on the open internet.
Nevertheless, OpenAI now says it's doubling down on free speech. This week, the company even removed warnings from ChatGPT that told users when they'd violated its policies. OpenAI told TechCrunch this was purely a cosmetic change, with no change to the model's outputs.
The company seems to want ChatGPT to feel less censored for users.
It wouldn't be surprising if OpenAI were also trying to impress the new Trump administration with this policy update, notes former OpenAI policy leader Miles Brundage in a post on X.
Trump has previously targeted Silicon Valley companies such as Twitter and Meta for having active content moderation teams that tend to shut out conservative voices.
OpenAI may be trying to get ahead of that. But there's also a larger shift underway in Silicon Valley and the AI world over the role of content moderation.
Generating answers to please everyone

Newsrooms, social media platforms, and search companies have historically struggled to deliver information to their audiences in a way that feels objective, accurate, and engaging.
Now, AI chatbot providers are in the same information-delivery business, facing arguably the hardest version of this problem yet: How do you automatically generate answers to any question?
Delivering information about controversial, real-time events is a constantly moving target, and it involves taking editorial stances, even if tech companies don't like to admit it. Those stances are bound to upset someone, miss some group's perspective, or give too much airtime to some political party.
For example, when OpenAI commits to letting ChatGPT represent all perspectives on controversial subjects, including conspiracy theories, racist or antisemitic movements, or geopolitical conflicts, that is inherently an editorial stance.
Some, including OpenAI co-founder John Schulman, argue that it's the right stance for ChatGPT. The alternative, doing a cost-benefit analysis to determine whether an AI chatbot should answer a user's question, could "give the platform too much moral authority," Schulman notes in a post on X.
Schulman isn't alone. "I think OpenAI is right to push in the direction of more speech," Dean Ball, a researcher at George Mason University's Mercatus Center, said in an interview with TechCrunch. "As AI models become smarter and more vital to the way people learn about the world, these decisions just become more important."
In previous years, AI model providers have tried to stop their chatbots from answering questions that might lead to "unsafe" answers. Almost every AI company stopped its chatbot from answering questions about the 2024 U.S. presidential election, which was widely considered a safe and responsible decision at the time.
But OpenAI's changes to its Model Spec suggest we may be entering a new era for what "AI safety" really means, one in which allowing an AI model to answer anything and everything is considered more responsible than making decisions for users.
Ball says this is partially because AI models are simply better now. OpenAI has made significant progress on AI model alignment; its latest reasoning models think about the company's AI safety policy before answering, which allows them to give better answers to delicate questions.
Of course, Elon Musk was the first to implement "free speech" in xAI's Grok chatbot, perhaps before the company was really ready to handle the most sensitive questions. It may still be too soon for leading AI models, but now others are embracing the same idea.
Shifting values in Silicon Valley
Mark Zuckerberg made waves last month by reorienting Meta's businesses around First Amendment principles. He praised Elon Musk in the process, saying X's owner took the right approach by using Community Notes, a community-driven content moderation program, to safeguard free speech.
In practice, both X and Meta ended up dismantling their longstanding trust and safety teams, allowing more controversial posts on their platforms and amplifying conservative voices.
X's changes may have hurt its relationships with advertisers, though that may have something to do with Musk taking the extraordinary step of suing some of them for boycotting the platform. Early signs indicate that Meta's advertisers were unfazed by Zuckerberg's free speech pivot.
Meanwhile, many tech companies beyond X and Meta have walked back the left-leaning policies that dominated Silicon Valley for the last several decades. Google, Amazon, and Intel have each eliminated or scaled back diversity initiatives in the last year.
OpenAI may be reversing course, too. The ChatGPT maker appears to have recently scrubbed a commitment to diversity, equity, and inclusion from its website.
As OpenAI embarks on one of the largest American infrastructure projects ever with Stargate, a $500 billion AI data center project, its relationship with the Trump administration is increasingly important. At the same time, the ChatGPT maker is vying to unseat Google Search as the dominant source of information on the internet.
Coming up with the right answers may prove key to both.