‘The problem isn’t just Siri and Alexa’: AI assistants tend to be feminine, perpetuating harmful gender stereotypes

By user · January 31, 2026

By 2024, the number of artificial intelligence (AI) voice assistants in use worldwide was projected to exceed 8 billion, more than one for every person on the planet. These assistants are kind and courteous, and they are almost always female by default.

Their names also have gender connotations. For example, Apple’s Siri (a Scandinavian female name) means “a beautiful woman who leads to victory.”

Meanwhile, IBM’s Watson for Oncology, which helps doctors process medical data, was given a masculine identity when it launched in 2015. The message is clear: women serve and men lead.


This is not harmless branding; it is a design choice that reinforces existing stereotypes about the roles women and men play in society.

Nor is it merely symbolic. These choices have real-world consequences, normalizing gender subordination and creating a risk of abuse.

The dark side of “friendly” AI

Recent research has revealed a range of harmful interactions with feminized AI.

A 2025 study found that up to 50% of human-machine interactions are verbally abusive.


An earlier study from 2020 put the figure at between 10% and 44%, with conversations often including sexually explicit language.

However, the field has not committed to systemic change, and even today many developers fall back on pre-scripted responses to verbal abuse, such as “Hmm, I don’t understand what that question means.”

These patterns raise serious concerns that such behavior can spill over into social relationships.


Gender is at the heart of the issue.

One 2023 experiment found that 18% of user interactions with a female-presenting agent focused on sex, compared with 10% for a male-presenting robot and just 2% for a genderless one.

Given that suggestive speech is difficult to detect, these numbers likely underestimate the problem. In some cases the figures are staggering: Brazil’s Bradesco Bank reported that its female-gendered chatbot received 95,000 sexually harassing messages in a single year.

Even more alarming is the rapid escalation of abuse.

Microsoft’s Tay chatbot was released on Twitter as an experiment in 2016, but it lasted only 16 hours before users trained it to spout racist and misogynistic slurs and it was taken offline.

In South Korea, users manipulated the chatbot Ruda into complying with sexual demands, treating it as an obedient “sex slave.” Some in South Korea’s online community dismissed this as a “victimless crime.”

In reality, the design choices behind these technologies (female voices, respectful responses, playful demeanor) create an environment that tolerates gender-based aggression.

These interactions reflect and reinforce real-world misogyny and teach users that it is acceptable to command, insult, and sexualize “her.” As abuse becomes commonplace in digital spaces, we need to seriously consider the risk that it spills over into offline behavior.

Ignoring concerns about gender bias

Regulation is struggling to keep up with the growing problem. Gender-based discrimination is rarely classified as high risk and is often assumed to be remediable through design.

Although the European Union’s AI Act requires risk assessments for high-risk applications and prohibits systems deemed to pose an “unacceptable risk,” the vast majority of AI assistants are not classified as “high risk.”

Gender stereotyping and the normalization of verbal abuse and harassment do not meet the current threshold for practices prohibited under the EU AI Act. Only extreme cases, such as a voice assistant that materially distorts human behavior and encourages dangerous acts, would fall within the law’s scope and be prohibited.

Canada requires gender-based impact assessments for government programs, but not for the private sector.

These are important steps, but they remain limited and are rare exceptions to the norm.

Most jurisdictions do not have regulations that address gender stereotyping and its consequences in AI design. Where regulations exist, they prioritize transparency and accountability, obscuring (or simply ignoring) concerns about gender bias.

In Australia, the government has indicated it will rely on existing frameworks rather than developing AI-specific rules.

This regulatory gap is important because AI is not static. Every sexist command, every abusive interaction feeds back into a system that shapes future output. Without intervention, we risk hard-coding human misogyny into the digital infrastructure of everyday life.

Not all assistant technologies, even female-gendered ones, are harmful. They can inform, educate and promote women’s rights. In Kenya, for example, sexual and reproductive health chatbots have improved young people’s access to information compared with traditional tools.

The challenge is to strike a balance: fostering innovation while setting parameters that ensure standards are met, rights are respected, and designers are held accountable when they are not.

A systemic problem

The problem isn’t just Siri or Alexa; it’s global.

Only 22% of AI professionals worldwide are women, and not having women at the design table means the technology is built from a narrow perspective.

Meanwhile, a 2015 survey of more than 200 senior women in Silicon Valley found that 65% had experienced unwanted sexual advances from their bosses. The culture that shapes AI is deeply unequal.

Hopeful narratives about “fixing bias” through better design and ethical guidelines ring hollow without enforcement. Voluntary guidelines cannot dismantle entrenched norms.

Laws and regulations need to recognize gender-based harms as high risk, require gender-based impact assessments, and require companies to demonstrate that they are minimizing such harms, with penalties when they fail to do so.

Regulation alone is not enough. Education, especially in the technology sector, is essential to understanding the impact of gender defaults in voice assistants. These tools are the product of human choices, and those choices perpetuate a world in which women, real or imagined, are cast as servile, submissive, or silent.

This edited article is republished from The Conversation under a Creative Commons license. Read the original article.

