xAI has blamed an "unauthorized modification" for a bug in its AI-powered Grok chatbot that caused Grok to repeatedly refer to "white genocide in South Africa" when invoked in certain contexts on X.
On Wednesday, Grok began replying to dozens of posts on X with information about white genocide in South Africa, even in response to unrelated subjects. The strange replies came from Grok's X account, which responds to users with AI-generated posts whenever someone tags "@grok".
According to a Thursday post from xAI's official X account, a change was made Wednesday morning to the Grok bot's system prompt (the high-level instructions that guide the bot's behavior). xAI says the tweak "violated [its] internal policies and core values," and that the company has "conducted a thorough investigation."
This is the second time xAI has publicly acknowledged that an unauthorized change to Grok's code caused the chatbot to respond in controversial ways.
In February, Grok briefly censored unflattering mentions of Donald Trump and Elon Musk, the billionaire founder of xAI and owner of X. Igor Babuschkin, an xAI engineering lead, said a rogue employee had instructed Grok to ignore sources that mentioned Musk or Trump spreading misinformation, and that xAI reverted the change as soon as users began pointing it out.
xAI said Thursday that it will make several changes to prevent similar incidents from occurring in the future.
Starting today, xAI will publish Grok's system prompts on GitHub, along with a changelog. The company says it will also "introduce additional checks and measures" to ensure that xAI employees cannot change the system prompt without review, and will establish a 24/7 monitoring team to respond to incidents with Grok's answers that are not caught by its automated systems.
Despite Musk's frequent warnings about the dangers of unchecked AI, xAI has a poor AI safety track record. A recent report found that Grok would undress photos of women when asked. The chatbot can also be considerably cruder than AI like Google's Gemini and ChatGPT, cursing without much restraint to speak of.
A study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly on safety among its peers, owing to its "very weak" risk management practices. Earlier this month, xAI missed its self-imposed deadline to publish a finalized AI safety framework.