Fyself News
Startups

Users who treat xAI’s Grok like a fact checker raise concerns about misinformation

By user | March 19, 2025

Some users of Elon Musk’s X are turning to Musk’s AI bot Grok for fact-checking, raising concerns among human fact-checkers that this could fuel misinformation.

Earlier this month, X enabled users to call on xAI’s Grok and ask it questions about various things. The move was similar to Perplexity, which runs an automated account on X to offer a comparable experience.

Soon after xAI created Grok’s automated account on X, users began experimenting with asking it questions. Some people in markets including India began asking Grok to fact-check comments and questions that target particular political beliefs.

Fact-checkers are concerned about Grok, or any other AI assistant of this kind, being used in this way, because such bots can frame their answers to sound convincing even when they are factually wrong. Instances of Grok spreading fake news and misinformation have been seen in the past.

Last August, five secretaries of state urged Musk to implement critical changes to Grok after misleading information generated by the assistant surfaced on social networks ahead of the US election.

Other chatbots, including OpenAI’s ChatGPT and Google’s Gemini, were also seen generating inaccurate information about last year’s election. Separately, disinformation researchers found in 2023 that AI chatbots, including ChatGPT, could be used to produce convincing text with misleading narratives.

“AI assistants, like Grok, are really good at using natural language to give an answer that sounds like a human being said it. In that way, AI products have this claim on naturalness and authentic-sounding responses even when they are potentially very wrong. That is the danger here,” said Angie Holan, director of the International Fact-Checking Network (IFCN) at Poynter.

Grok was asked by an X user to fact-check claims made by another user

Unlike AI assistants, human fact-checkers use multiple credible sources to verify information. They also take full accountability for their findings, with their names and organizations attached, to ensure credibility.

Pratik Sinha, co-founder of the Indian non-profit fact-checking website Alt News, said that while Grok currently appears to have convincing answers, it is only as good as the data it is supplied with.

“Who decides what data it gets supplied with? That is where government interference and so on come into the picture,” he pointed out.

“There’s no transparency. Anything that lacks transparency can be molded in any which way, and that will cause harm.”

“It could be misused – to spread misinformation.”

In one of its responses posted earlier this week, Grok’s automated account on X acknowledged that it “could be misused” to spread misinformation and violate privacy.

However, the automated account does not display any disclaimers alongside its answers, which could mislead users if it has hallucinated a response, a known drawback of AI.

Grok’s response on whether it can spread misinformation (translated from Hinglish)

“It might make up information to provide a response,” Anushka Jain, a researcher at Digital Futures Lab, a multidisciplinary research collective based in Goa, told TechCrunch.

There are also questions about the extent to which Grok uses posts on X as training data, and what quality-control measures it applies when fact-checking such posts. Last summer, X pushed out a change that appeared to let Grok consume X user data by default.

Another area of concern is that AI assistants like Grok are accessible through social media platforms, where their delivery of information is public, unlike ChatGPT and other chatbots that are used privately.

Even if a user is well aware that the information obtained from the assistant could be misleading or not entirely correct, others on the platform may still believe it.

This can cause serious social harm. Instances of that were seen earlier in India, when misinformation circulating on WhatsApp led to real-world violence. However, those severe incidents occurred before the arrival of generative AI, which has made producing synthetic content even easier and more realistic-looking.

“If you see a lot of these Grok answers, you’re going to say, hey, most of them are right, and that may be so, but there are going to be some that are wrong. How many? It’s not a small fraction. Some research studies have shown error rates of 20 percent for AI models, and when it goes wrong, it can go really wrong, with real-world consequences,” Holan said.

AI vs. real fact-checkers

AI companies, including xAI, are refining their models to communicate like humans, but the models still cannot replace human fact-checkers.

Over the past few months, tech companies have been looking for ways to reduce their reliance on human fact-checkers. Platforms including X and Meta have begun embracing a new concept of crowdsourced fact-checking through so-called Community Notes.

Naturally, such changes also cause concern among fact-checkers.

Sinha of Alt News is optimistic that people will learn to differentiate between machines and human fact-checkers, and will come to value human accuracy more.

“In the end, we’ll see the pendulum swing back toward more fact-checking,” says Holan of the IFCN.

However, she noted that in the meantime, fact-checkers are likely to have more work to do as AI-generated information spreads rapidly.

“A lot of this issue depends on whether you really care about what is actually true or not, or whether you are just looking for the veneer of something that sounds and feels true without actually being true,” she said.

X and xAI did not respond to requests for comment.

