Some users of Elon Musk’s X are turning to Musk’s AI bot Grok for fact-checking, raising concerns among human fact-checkers that the practice could fuel misinformation.
Earlier this month, X enabled users to call on xAI’s Grok and ask it questions about various topics. The move was similar to Perplexity, which has been running an automated account on X to offer a similar experience.
Shortly after xAI created Grok’s automated account on X, users began experimenting with asking it questions. Some people in markets including India began asking Grok to fact-check comments and questions that target particular political beliefs.
Fact-checkers are concerned about Grok, or any other AI assistant of this sort, being used this way because the bots can frame their answers to sound convincing even when they are not factually correct. Instances of Grok spreading fake news and misinformation have been seen in the past.
Last August, five secretaries of state urged Musk to implement critical changes to Grok after misleading information generated by the assistant surfaced on social networks ahead of the US election.
Other chatbots, including OpenAI’s ChatGPT and Google’s Gemini, were also seen generating inaccurate information about last year’s election. Separately, disinformation researchers found in 2023 that AI chatbots, including ChatGPT, could be used to produce convincing text built around misleading narratives.
“AI assistants, like Grok, are really good at using natural language to give answers that sound like a human said them. And in that way, the AI products have this claim on naturalness and authentic-sounding responses, even when they are potentially very wrong. That is the danger here,” said Angie Holan, director of the International Fact-Checking Network (IFCN) at Poynter.

Unlike AI assistants, human fact-checkers use multiple credible sources to verify information. They also take full accountability for their findings, with their names and organizations attached, to ensure credibility.
Pratik Sinha, co-founder of the Indian non-profit fact-checking website Alt News, said that although Grok currently appears to give convincing answers, it is only as good as the data it is supplied with.
“Who is going to decide what data it gets supplied with? That is where government interference and the like will come into the picture,” he pointed out.
“There’s no transparency. Anything that lacks transparency can be molded in any which way, and so it will cause harm.”
“Could be misused – to spread misinformation”
In one of the responses posted earlier this week, Grok’s account on X acknowledged that it “could be misused – to spread misinformation and violate privacy.”
However, the automated account shows no disclaimer when users get its answers, so they can be misled if it has, for instance, hallucinated the answer, a potential drawback of AI.

“It may make up information to provide a response,” Anushka Jain, a research associate at Digital Futures Lab, a multidisciplinary research collective based in Goa, told TechCrunch.
There are also questions about the extent to which Grok uses posts on X as training data, and what quality-control measures it applies when fact-checking such posts. Last summer, X pushed out a change that appeared to let Grok consume X user data by default.
Another concerning aspect of AI assistants like Grok being accessible through social media platforms is that they deliver information in public, unlike ChatGPT and other chatbots that are used privately.
Even if a user is well aware that the information obtained from the assistant could be misleading or not entirely correct, others on the platform may still believe it.
This could cause serious social harm. Instances of that were seen earlier in India, when misinformation circulating over WhatsApp led to mob lynchings. However, those severe incidents occurred before the arrival of GenAI, which has made generating synthetic content even easier and more realistic-looking.
“If you see a lot of these Grok answers, you’re going to say, hey, well, most of them are right, and that may be so, but some of them are going to be wrong. And how many? It’s not a small fraction; some research studies have shown AI models are subject to 20% error rates…and when it goes wrong, it can go really wrong, with real-world consequences,” Holan said.
AI vs. real fact-checkers
AI companies, including xAI, are refining their models to communicate more like humans, but they still cannot replace human fact-checkers.
Over the past few months, tech companies have been exploring ways to reduce their reliance on human fact-checkers. Platforms including X and Meta have begun embracing crowdsourced fact-checking through so-called Community Notes.
Naturally, such changes also raise concerns for fact-checkers.
Sinha of Alt News is optimistic that people will learn to distinguish between machine and human fact-checkers and will come to value human accuracy more.
“Eventually, we’ll see the pendulum swing back toward more fact-checking,” says Holan of the IFCN.
However, she noted that in the meantime, fact-checkers will likely have more work to do as AI-generated information spreads rapidly.
“A lot of this issue depends on whether you really care about what is actually true or not. Are you just looking for the veneer of something that sounds true or feels true without actually being true?” she said.
X and xAI did not respond to requests for comment.