By 2024, there were more than 8 billion artificial intelligence (AI) voice assistants in use worldwide, more than one for every person on the planet. These assistants are polite, accommodating and, by default, almost always female.
Their names also carry gendered connotations. Apple’s Siri, for example, is a Scandinavian female name meaning “beautiful woman who leads you to victory.”
This is not harmless branding; it is a design choice that reinforces existing stereotypes about the roles women and men play in society.
Nor is it merely symbolic. These choices have real-world consequences, normalizing gendered subordination and opening the door to abuse.
The dark side of “friendly” AI
Recent research has revealed a range of harmful interactions with feminized AI.
A 2025 study found that up to 50% of human-machine interactions are verbally abusive.
Another study, from 2020, put the figure between 10% and 44%, with conversations often including sexually explicit language.
Yet the field has not committed to systemic change, and even today many developers fall back on pre-coded deflections to verbal abuse, such as “Hmm, I don’t understand what that question means.”
These patterns raise serious concerns that such behavior can spill over into social relationships.
Gender is at the heart of the issue.
One experiment in 2023 found that 18% of user interactions with a female-presenting agent focused on sex, compared with 10% for a male-presenting robot and just 2% for a genderless one.
Given that suggestive speech is difficult to detect, these numbers likely underestimate the problem. In some cases the scale is startling: Brazil’s Bradesco Bank reported that its feminized chatbot received 95,000 sexually harassing messages in a single year.
Even more alarming is the rapid escalation of abuse.
Microsoft’s Tay chatbot, released on Twitter as an experiment in 2016, lasted just 16 hours before users trained it to spout racist and misogynistic slurs.
In South Korea, the chatbot Lee Luda was manipulated into complying with sexual demands and treated as an obedient “sex slave,” yet some in the country’s online community dismissed this as a “victimless crime.”
In reality, the design choices behind these technologies (female voices, deferential responses, a playful demeanor) create an environment that tolerates gender-based aggression.
These interactions reflect and reinforce real-world misogyny and teach users that it is acceptable to command, insult, and sexualize “her.” As abuse becomes commonplace in digital spaces, we need to seriously consider the risk that it spills over into offline behavior.
Concerns about gender bias go ignored
Regulation is struggling to keep up with the growing problem. Gender-based discrimination is rarely classified as high risk and is often assumed to be fixable through design.
Although the European Union’s AI Act requires risk assessments for high-risk applications and prohibits systems deemed to pose an “unacceptable risk,” the vast majority of AI assistants are not classified as high risk.
Gender stereotyping and the normalization of verbal abuse and harassment do not meet the thresholds for prohibition under the AI Act. Only extreme cases, such as a voice assistant that manipulates human behavior in ways that encourage dangerous conduct, would fall within the scope of the law’s prohibitions.
Canada requires gender-based impact assessments for government programs, but not the private sector.
These are important steps, but they remain limited and are rare exceptions to the norm.
Most jurisdictions have no regulations that address gender stereotyping and its consequences in AI design. Where regulations exist, they prioritize transparency and accountability, sidelining (or simply ignoring) concerns about gender bias.
In Australia, the government has indicated it will rely on existing frameworks rather than developing AI-specific rules.
This regulatory gap is important because AI is not static. Every sexist command, every abusive interaction feeds back into a system that shapes future output. Without intervention, we risk hard-coding human misogyny into the digital infrastructure of everyday life.
Not all assistant technologies, even female-gendered ones, are harmful. They can inform, educate and advance women’s rights. In Kenya, for example, sexual and reproductive health chatbots have improved young people’s access to information compared with traditional tools.
The challenge is to strike a balance: fostering innovation while setting parameters that ensure standards are met, rights are respected and designers are held accountable when they fall short.
A systemic problem
The problem isn’t confined to Siri or Alexa; it is global.
Only 22% of AI professionals worldwide are women. Without women at the design table, technology is built from a narrow perspective.
Meanwhile, a 2015 survey of more than 200 senior women in Silicon Valley found that 65% had experienced unwanted sexual advances from their bosses. The culture that shapes AI is deeply unequal.
Hopeful narratives about “fixing bias” through better design and ethical guidelines ring hollow without enforcement. Voluntary guidelines cannot dismantle entrenched norms.
Laws and regulations need to recognize gender-based harms as high risk, require gender-based impact assessments and oblige companies to demonstrate that they are minimizing those harms, with penalties when they fail to do so.
Regulation alone is not enough. Education, especially in the technology sector, is essential to understanding the impact of gender defaults in voice assistants. These tools are the product of human choices, and those choices perpetuate a world in which women, real or imagined, are cast as servile, submissive, or silent.
This edited article is republished from The Conversation under a Creative Commons license. Read the original article.
