Science

When an AI algorithm is labeled as “female-friendly”, people are more likely to abuse it

By user · December 3, 2025 · 4 min read

People are more likely to exploit AI partners labeled female than AI partners labeled male, indicating that gender-based discrimination extends beyond human-to-human interactions.

A recent study, published November 2 in the journal iScience, investigated how people’s willingness to cooperate changes when a human or AI partner is labeled as female, male, nonbinary, or given no gender label at all.

The researchers asked participants to play a well-known game from game theory called the prisoner’s dilemma, in which each of two players chooses either to cooperate or to act alone. If both cooperate, they get the best outcome for the pair.

However, if one player cooperates and the other does not, the defector ends up with the higher score, giving each side an incentive to “exploit” the other. If both players refuse to cooperate, both end up with lower scores.
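
To make the incentive structure concrete, here is a minimal Python sketch of the payoffs described above. The point values are illustrative assumptions; the article gives only the ordering of outcomes, not the study’s actual payoff matrix.

    # A minimal sketch of the prisoner's dilemma payoffs described above.
    # The point values are illustrative assumptions; the article gives only
    # the ordering of outcomes, not the study's actual payoff matrix.

    # (my_move, partner_move) -> (my_score, partner_score)
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: best joint outcome
        ("cooperate", "defect"): (0, 5),     # the defector "exploits" the cooperator
        ("defect", "cooperate"): (5, 0),
        ("defect", "defect"): (1, 1),        # mutual defection: both score lower
    }

    def play(my_move, partner_move):
        """Return (my_score, partner_score) for one round."""
        return PAYOFFS[(my_move, partner_move)]

    print(play("defect", "cooperate"))     # (5, 0): one-sided exploitation
    print(play("cooperate", "cooperate"))  # (3, 3): best outcome for the pair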

The study found that people were about 10% more likely to exploit an AI partner than a human partner. Participants were also more likely to cooperate with partners labeled female, nonbinary, or ungendered than with partners labeled male, because they expected those partners to cooperate in return.

People were less likely to cooperate with male-labeled partners because they did not trust them to choose cooperation. In particular, female participants were more likely to cooperate with other “female” agents than with agents perceived as male, an effect known as gender homophily.

“The biases observed in human-AI agent interactions are likely to influence the design of AI agents, for example to maximize people’s engagement and build trust in interactions with automated systems,” the researchers wrote in their study. “Designers of these systems need to be aware of unwanted biases in human interaction and actively work to mitigate them in the design of conversational AI agents.”


Risks of anthropomorphizing AI agents

When participants chose not to cooperate, it was for one of two reasons. The first was that they expected the other player to defect and did not want to be left with the lowest score. The second was that they expected the other player to cooperate and defected anyway, securing the higher score at that player’s expense. The researchers defined this second choice as exploitation.
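
Read as a simple decision rule, the distinction between those two motives might be sketched as follows; the function name and labels are illustrative, not terminology from the study.

    def classify_defection(expected_partner_move):
        # Hypothetical labels for the article's two reasons to defect.
        if expected_partner_move == "cooperate":
            return "exploitation"    # defecting against an expected cooperator
        return "self-protection"     # defecting to avoid the lowest score

    print(classify_defection("cooperate"))  # exploitation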

Participants were more likely to exploit a partner labeled female, nonbinary, or ungendered than one labeled male, and the likelihood of exploitation rose further when the partner was an AI. Men were more likely to exploit AI partners than human partners and more likely to cooperate with human ones. Women cooperated more often than men and did not differentiate between human and AI partners.

The study did not include enough participants who identified as a gender other than female or male to draw conclusions about how people of other genders interact with gendered human and AI partners.

According to the study, AI tools are increasingly being anthropomorphized (given human-like characteristics such as gender and names) to encourage people to trust and engage with them.

But anthropomorphizing AI without considering how gender-based discrimination affects people’s interactions can reinforce existing biases and exacerbate discrimination.

Many of today’s AI systems are online chatbots, but in the near future people may routinely share the road with self-driving cars or let AI manage their work schedules. This means we may have to work with AI in the same ways we are currently expected to work with other humans, making awareness of gender bias in AI even more important.

“Although exhibiting discriminatory attitudes toward gendered AI agents may not be a major ethical challenge in itself, it can foster harmful practices and exacerbate gender-based discrimination that exists within our society,” the researchers added.

“By understanding the underlying patterns of bias and user perceptions, designers can work towards creating effective and trustworthy AI systems that can meet the needs of users while promoting and maintaining positive social values such as fairness and justice.”

