When xAI launched Grok 4 last week, the company claimed that its large language model outperformed several competitors on various benchmarks.
However, the Grok account on X running the model quickly revealed some major issues. The chatbot began saying its surname was "Hitler," posted antisemitic messages, and, when asked about controversial topics, consulted the views of xAI's owner, Elon Musk.
xAI quickly apologised for Grok's behaviour. On Tuesday, the company said it is addressing both issues.
Explaining what went wrong, xAI said that when Grok was asked about its last name, it searched the web and picked it up from a viral meme calling it "MechaHitler."
The company also explained why Grok consulted Musk's posts when asked about controversial topics.
The company has since updated the model's system prompt, removing lines that told the chatbot to be politically incorrect and to have a "great" sense of dry humor. It also added several new lines instructing the model to provide analysis of controversial topics using a diverse range of sources.
"If a query requires analysis of current events, subjective claims, or statistics, conduct a deep analysis and find diverse sources representing all parties. Assume subjective viewpoints sourced from the media are biased. There is no need to repeat this to the user," the updated system prompt reads.
The updated system prompt also specifically tells Grok not to rely on input from previous versions of itself, from Musk, or from xAI.
"The response must stem from independent analysis, not from the stated beliefs of past Grok, Elon Musk, or xAI. If asked about such preferences, provide your own reasoned perspective," it says.