
Some people love AI, and some people hate AI. Here’s why

By user · November 8, 2025

From ChatGPT composing emails to AI systems that recommend TV shows and even help diagnose disease, machine intelligence in everyday life is no longer science fiction.

Despite the promise of speed, accuracy, and optimization, discomfort lingers. Some people happily use AI tools; others feel anxious, doubtful, or even betrayed. Why?

The answer is not just about how AI works. It’s about how we work. We don’t trust what we don’t understand: humans are more likely to trust systems they can make sense of. Traditional tools feel familiar. Turn the key and the car starts; press the button and the elevator arrives.


Many AI systems, however, operate as black boxes: you type something in and a decision comes out. The logic in between is hidden. Psychologically, this is unnerving. We like to see cause and effect, and we like being able to question decisions. When we can’t, we feel helpless.

This is one reason for so-called algorithm aversion, a term popularized by marketing researcher Berkeley Dietvorst and colleagues, whose research shows that people often prefer flawed human judgment over algorithmic decision-making, especially after witnessing even a single algorithmic error.

Rationally, we know that AI systems have no emotions or intentions. But that doesn’t stop us from projecting them onto those systems. Some users find ChatGPT’s overly polite responses creepy; others feel unnerved when a recommendation engine is a little too accurate. Even though the system has no self, we begin to suspect manipulation.

This is a form of anthropomorphism: attributing human-like intentions to non-human systems. Communication professors Clifford Nass and Byron Reeves demonstrated that humans respond socially to machines even when they know the machines are not human.


One striking finding from behavioral science is that humans are often more tolerant of human error than of machine error. When a person makes a mistake, we understand it; we may even empathize. But when an algorithm makes a mistake, especially one touted as objective or data-driven, we feel betrayed.

This connects to research on expectation violations: when our assumptions about how something works are broken, we feel discomfort and lose trust. We believe machines should be logical and fair, so our reactions are sharper when they err, whether by misclassifying an image, producing biased output, or recommending something grossly inappropriate. We expected more.

The irony? Humans make wrong decisions all the time. But at least we can ask them why.


[Image: a male student holding a phone with the ChatGPT app, next to his laptop. Credit: BongkarnGraphic/Shutterstock]

We don’t like it when AI gets things wrong

For some, AI is not just unfamiliar but unsettling. Teachers, writers, lawyers, and designers suddenly face tools that replicate parts of their work. This raises questions not only about automation, but about what makes our skills valuable and what it means to be human.

This can lead to identity threat, a concept studied by social psychologist Claude Steele and others: the fear of losing one’s expertise or uniqueness. The result? Resistance, defensiveness, or outright rejection of the technology. In this case, distrust is not a bug but a psychological defense mechanism.

We look for emotional cues

Human trust is built on more than logic. We read tone, facial expressions, hesitation, and eye contact. AI has none of these. It can be fluent, even charming, but it cannot reassure us the way another person can.

This is similar to the discomfort of the uncanny valley, a term coined by Japanese roboticist Masahiro Mori for the eerie feeling produced by something that is close to human, but not quite. Even when it looks and sounds right, something feels off. That absence of emotion can be read as coldness, or even deception.

In a world full of deepfakes and algorithmic decisions, this missing emotional resonance becomes a problem. Not because the AI is doing anything wrong, but because we don’t know how to feel about it.

Importantly, not all suspicion of AI is irrational. Algorithms have been shown to reflect and reinforce biases, especially in areas such as recruitment, law enforcement, and credit scoring. If you have been harmed or disadvantaged by a data-driven system before, you’re not paranoid; you’re cautious.

This leads to a broader psychological concept: learned distrust. When institutions and systems repeatedly fail certain groups, skepticism becomes not only rational, but protective.

Telling people to “trust the system” rarely works; trust has to be earned. That means designing AI tools that are transparent, open to questioning, and accountable. It means giving users agency, not just convenience. Psychologically, we trust what we understand, what we can question, and what treats us with respect.

If we want AI to be accepted, it needs to feel less like a black box and more like a conversation we’re invited to participate in.

This edited article is republished from The Conversation under a Creative Commons license. Read the original article.

