Fyself News
Startups

Users who treat X’s Grok like a fact-checker raise concerns about misinformation

By user · March 19, 2025 · 5 min read

Some users on Elon Musk’s X have turned to Musk’s AI bot Grok to check facts, raising concerns among human fact-checkers that this could promote misinformation.

Earlier this month, X enabled users to call on xAI’s Grok and ask it questions about various things. The move was similar to Perplexity, which has been running an automated account on X to provide a similar experience.

Shortly after xAI created Grok’s automated account on X, users began experimenting with asking it questions. Some people in markets including India began asking Grok to fact-check comments and questions that target particular political beliefs.

Fact-checkers are concerned about Grok, or any AI assistant of this kind, being used this way, because such bots can frame their answers to sound convincing even when they are factually wrong. Instances of Grok spreading fake news and misinformation have been seen in the past.

Last August, five secretaries of state urged Musk to implement critical changes to Grok after misleading information generated by the assistant surfaced on social networks ahead of the US election.

Other chatbots, including OpenAI’s ChatGPT and Google’s Gemini, were also seen generating inaccurate information about last year’s election. Separately, disinformation researchers found in 2023 that AI chatbots, including ChatGPT, could be used to produce convincing text with misleading narratives.

“AI assistants, like Grok, are really good at using natural language to give answers that sound like a human said them. The AI products have this claim on naturalness and authentic-sounding responses even when they are potentially very wrong. That is the danger here,” Angie Holan, director of the International Fact-Checking Network (IFCN), told TechCrunch.

Grok was asked by a user on X to fact-check claims made by another user

Unlike AI assistants, human fact-checkers use multiple credible sources to verify information. They also take full accountability for their findings, with their names and organizations attached to ensure credibility.

Pratik Sinha, co-founder of India’s non-profit fact-checking website Alt News, said that even though Grok currently appears to have convincing answers, it is only as good as the data it is supplied with.

“Who is going to decide what data it gets supplied with? That is where issues like government interference will come into the picture,” he pointed out.

“There’s no transparency. Anything that lacks transparency will cause harm, because anything that lacks transparency can be molded in any which way.”

“It could be misused – to spread misinformation.”

In one of its answers posted earlier this week, Grok’s account on X acknowledged that it “could be misused” to spread misinformation and violate privacy.

However, the automated account does not display any disclaimers to users when they get its answers, leaving them open to being misinformed if it hallucinates an answer, a potential drawback of AI.

Grok’s response on whether it can spread misinformation (translated from Hinglish)

“It may make up information to provide a response,” Anushka Jain, a researcher at Digital Futures Lab, a multidisciplinary research collective based in Goa, told TechCrunch.

There are also questions about the extent to which Grok uses posts on X as training data, and what quality-control measures it uses to fact-check such posts. Last summer, Grok pushed out a change that appeared to let it consume X user data by default.

Another concerning aspect of AI assistants like Grok being accessible through social media platforms is their delivery of information in public, unlike ChatGPT and other chatbots that are used privately.

Even if a user is well aware that information obtained from the assistant could be misleading or not entirely correct, others on the platform may still believe it.

This could cause serious social harm. Instances of that were seen earlier in India, when misinformation circulating on WhatsApp led to serious incidents. Those incidents, however, occurred before the arrival of GenAI, which has made producing synthetic content even easier and more realistic-seeming.

“If you see a lot of these Grok answers, you’re going to say, hey, well, most of them are right, and that may be so, but there will be some that are wrong. How many? It’s not a small fraction. Some research studies have shown error rates of 20% … and when it goes wrong, it can go really wrong, with real-world consequences,” Holan said.

AI vs. real fact-checkers

AI companies, including xAI, are refining their models to communicate like humans, but they still cannot replace human fact-checkers.

Over the past few months, tech companies have been exploring ways to reduce their reliance on human fact-checkers. Platforms including X and Meta have begun embracing crowdsourced fact-checking through so-called Community Notes.

Naturally, such changes also raise concerns among fact-checkers.

Sinha of Alt News is optimistic that people will learn to distinguish between machine and human fact-checkers and will come to value human accuracy more.

“In the end we’ll see the pendulum coming back towards more fact checks,” says Holan of IFCN.

However, she noted that in the meantime, fact-checkers will likely have more work to do as AI-generated information spreads rapidly.

“A lot of this issue depends on: do you really care about what is actually true or not? Are you just looking for the veneer of something that sounds or feels true without actually being true? Because that’s what AI assistance will get you,” she said.

X and xAI did not respond to requests for comment.

