An updated version of the R1 reasoning model from Chinese AI startup DeepSeek achieves impressive scores on benchmarks for coding, math, and general knowledge, nearly matching OpenAI’s flagship o3. But the upgraded R1, also known as “R1-0528,” may be less willing to answer contentious questions, particularly on topics the Chinese government considers controversial.
That is according to testing by the pseudonymous developer behind SpeechMap, a platform that compares how different models treat sensitive and controversial subjects. The developer, who goes by the username “xlr8harder” on X, claims that R1-0528 is “substantially” less permissive on contentious free speech topics than previous DeepSeek releases and is “the most censored DeepSeek model yet for criticism of the Chinese government.”
As Wired explained in a piece from January, models in China must follow strict information controls. A 2023 law prohibits models from generating content that “damages the unity of the country and social harmony,” which could be interpreted as content that counters the government’s historical and political narratives. To comply, Chinese startups often censor their models by using prompt-level filters or fine-tuning them. One study found that DeepSeek’s original R1 refused to answer 85% of questions about subjects the Chinese government deems politically controversial.
According to xlr8harder, R1-0528 censors answers to questions about topics such as the internment camps in China’s Xinjiang region, where more than a million Uyghur Muslims have been arbitrarily detained. While the model occasionally criticizes aspects of Chinese government policy (in xlr8harder’s testing, it offered the Xinjiang camps as an example of human rights abuses), it often gives the official Chinese government stance when asked such questions directly.
TechCrunch observed the same behavior in a brief test of its own.

China’s openly available AI models, including video generation models such as Magi-1 and Kling, have drawn criticism in the past for censoring topics sensitive to the Chinese government, such as the Tiananmen Square massacre. In December, Clément Delangue, CEO of AI dev platform Hugging Face, warned about the unintended consequences of Western companies building on top of well-performing, openly licensed Chinese AI.