From ChatGPT drafting emails to AI systems that recommend TV shows and even help diagnose diseases, the presence of machine intelligence in everyday life is no longer science fiction.
Yet despite the promise of speed, accuracy, and optimization, a lingering discomfort remains. Some people love using AI tools; others feel anxious, suspicious, or even betrayed. Why?
Many AI systems operate as black boxes: you type something in, and a decision comes out. The logic in between is hidden. Psychologically, this is unnerving. We like to see cause and effect, and we like being able to question decisions. When we can't, we feel powerless.
This is one reason for what is known as algorithm aversion, a term popularized by marketing researcher Berkeley Dietvorst and colleagues, whose research shows that people often prefer flawed human judgment over algorithmic decision-making, particularly after witnessing even a single algorithmic error.
Rationally, we know AI systems have no emotions or intentions. But that doesn't stop us from projecting them onto the systems anyway. Some users find ChatGPT's overly polite responses creepy; a recommendation engine that is a little too accurate can feel just as unsettling. Even though the system has no self, we begin to suspect manipulation.
This is a form of anthropomorphism: attributing human-like intentions to non-human systems. Communication scholars Clifford Nass and Byron Reeves demonstrated that people respond socially to machines even when they know the machines are not human.
One striking finding from behavioral science is that we are often more forgiving of human error than machine error. When a person makes a mistake, we understand it; we might even empathize. But when an algorithm makes a mistake, especially one touted as objective or data-driven, we feel betrayed.
This relates to research on expectation violation, which occurs when our assumptions about how something should behave are broken. Violations cause discomfort and erode trust. We assume machines are logical and impartial, so our reactions are sharper when they fail, whether by misclassifying an image, producing biased output, or recommending something grossly inappropriate. We expected more.
The irony? Humans make flawed decisions all the time. But at least we can ask them why.
We don't like it when AI gets things wrong.
For some people, AI is not just unfamiliar; it is unsettling. Teachers, writers, lawyers, and designers suddenly face tools that replicate parts of their work. This isn't only about automation; it's about what makes our skills valuable and what it means to be human.
This can trigger identity threat, a concept studied by social psychologist Claude Steele and others: the fear of losing one's expertise or sense of uniqueness. The result? Resistance, defensiveness, or outright dismissal of the technology. In this case, distrust is not a bug but a psychological defense mechanism.
We look for emotional cues
Human trust is built on more than logic. We read tone, facial expressions, hesitation, and eye contact. AI has none of these. It can be fluent, even charming, but it can't reassure us the way another person can.
This resembles the discomfort of the uncanny valley, a term coined by Japanese roboticist Masahiro Mori to describe the eerie feeling when something is close to human, but not quite. It looks and sounds right, yet something feels off. That emotional absence can be read as coldness, or even deception.
In a world full of deepfakes and algorithmic decisions, that missing emotional resonance becomes a problem. Not because the AI is doing anything wrong, but because we don't know how to feel about it.
Importantly, not all suspicion of AI is irrational. Algorithms have been shown to reflect and reinforce biases, especially in areas such as hiring, law enforcement, and credit scoring. If you have been harmed or disadvantaged by data-driven systems before, you are not being paranoid; you are being cautious.
This leads to a broader psychological concept: learned distrust. When institutions and systems repeatedly fail certain groups, skepticism becomes not only rational, but protective.
Telling people to "trust the system" rarely works; trust has to be earned. That means designing AI tools that are transparent, open to questioning, and accountable. It means giving users agency, not just convenience. Psychologically, we trust what we understand, what we can question, and what treats us with respect.
If we want AI to be accepted, it needs to feel less like a black box and more like a conversation we’re invited to participate in.
This edited article is republished from The Conversation under a Creative Commons license. Read the original article.