When DeepSeek, Alibaba, and other Chinese companies released their AI models, Western researchers quickly noticed that the models avoided questions critical of the Chinese Communist Party. US officials later confirmed that these tools were engineered to reflect Beijing's talking points, raising concerns about censorship and bias.
American AI leaders like OpenAI have pointed to this as justification for advancing their technology quickly, without too much regulation or oversight. As OpenAI's chief global affairs officer Chris Lehane wrote in a LinkedIn post last month, there is a contest between "US-led democratic AI and Communist-led China's autocratic AI."
An executive order signed by President Donald Trump on Wednesday would bar "woke" AI models that are not "ideologically neutral" from government contracts.
The order singles out diversity, equity, and inclusion (DEI), calling it a "pervasive and destructive" ideology that can "distort the quality and accuracy of the output." Specifically, it refers to information about race or sex, manipulation of racial or sexual representation, critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism.
Experts warn that developers may come under pressure to align their models' outputs and datasets with White House rhetoric in order to secure federal dollars for their cash-burning businesses.
The order comes on the same day the White House released Trump's "AI Action Plan," which shifts national priorities away from societal risk and toward building AI infrastructure, cutting red tape for tech companies, shoring up national security, and competing with China.
The order directs the Director of the Office of Management and Budget, along with the Administrator for Federal Procurement Policy, the Administrator of General Services, and the Director of the Office of Science and Technology Policy, to issue guidance to other agencies on how to comply.
“Once and for all, we are getting rid of woke,” Trump said Wednesday at an AI event hosted by the All-In Podcast and the Hill & Valley Forum. “I am signing an order banning the federal government from procuring AI technology that has been infused with partisan bias or ideological agendas.”
Determining what qualifies as impartial or objective is one of the order's many challenges.
Philip Seargeant, a senior lecturer in applied linguistics at the Open University, told TechCrunch that true objectivity is impossible.
“One of the fundamental doctrines of sociolinguistics is that language is never neutral,” Seargeant said. “So the idea that you can get pure objectivity is fantasy.”
Moreover, the Trump administration's ideology does not reflect the beliefs and values of all Americans. Trump has repeatedly sought to eliminate funding for climate initiatives, education, public broadcasting, research, social services grants, community and farm support programs, and gender-affirming care, often framing these initiatives as examples of "woke" or politically biased government spending.
Rumman Chowdhury, a data scientist, CEO of the tech nonprofit Humane Intelligence, and former US science envoy for AI, said, "Anything [the Trump administration doesn't] like is immediately tossed into this pejorative pile of woke."
The definitions of "truth-seeking" and "ideological neutrality" in the order published Wednesday are vague in some ways and specific in others. "Truth-seeking" is defined as LLMs that "prioritize historical accuracy, scientific inquiry, and objectivity," while "ideological neutrality" is defined as LLMs that are "neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas like DEI."
These definitions leave room for broad interpretation, as well as potential pressure, since they give AI companies little concrete direction on how to comply. And while executive orders do not carry the force of legislation, frontier AI companies could still find themselves subject to the shifting priorities of the administration's political agenda.
Last week, OpenAI, Anthropic, Google, and xAI signed contracts with the Department of Defense worth up to $200 million each to develop agentic AI workflows that address critical national security challenges.
It is unclear which of these companies stands to benefit most from the woke AI ban, or whether they will comply. TechCrunch has reached out to each of them and will update this article if we receive replies.
Despite displaying biases of its own, xAI may be the most aligned with the order, at least at this early stage. Elon Musk has positioned Grok, xAI's chatbot, as the ultimate anti-woke, "unbiased" truth-seeker. Grok's system prompts have directed it to avoid deferring to mainstream authorities and media, to seek out contrarian information even if it is politically incorrect, and to reference Musk's own views on controversial topics. In recent months, Grok has spewed antisemitic comments and praised Hitler on X.
Mark Lemley, a law professor at Stanford University, told TechCrunch that the executive order is "clearly intended as viewpoint discrimination, since [the government] has just signed a contract with Grok, aka 'MechaHitler.'"
Alongside xAI's DOD contract, the company announced that "Grok for Government" has been added to the General Services Administration schedule, meaning xAI products are now available for purchase across every government office and agency.
"The right question is this: would they ban the AI they just signed a large contract with because it has been deliberately engineered to give politically charged answers?" Lemley said in an email interview. "If not, it is clearly designed to discriminate against a particular viewpoint."
As Grok's own system prompts show, a model's output can reflect both the people who build the technology and the data the AI is trained on. In some cases, an overabundance of caution among developers, combined with AI trained on internet content that promotes values like inclusivity, leads to distorted model output. Google, for example, came under fire last year after its Gemini chatbot depicted a Black George Washington and racially diverse Nazis.
Chowdhury says her biggest fear about the executive order is that AI companies will actively rework their training data to toe the party line. She pointed to a statement from Musk a few weeks before the launch of Grok 4, saying that xAI would use the new model and its advanced reasoning capabilities to "rewrite the entire corpus of human knowledge, adding missing information and deleting errors," and then retrain on it.
This would ostensibly put Musk in the position of determining what is true.
Of course, companies have been making judgment calls about what information is seen and not seen since the dawn of the internet.
Conservative David Sacks, the entrepreneur and investor whom Trump appointed AI czar, has spoken openly about his concerns over "woke AI" on the All-In Podcast, which co-hosted the event where Trump made his AI announcement. Sacks has accused the creators of prominent AI products of injecting left-wing values into them, framing his arguments as a defense of free speech and a warning against a trend toward centralized ideological control of digital platforms.
The problem, according to experts, is that there is no one truth. Achieving unbiased or neutral results is impossible, especially in today's world, where even facts are politicized.
"If the results that an AI produces say that climate science is correct, is that left-wing bias?" Seargeant said. "Some people say you need to give both sides of the argument to be objective, even when one side of the argument has no standing."