
A Systematic Review of Ethical Concerns with Voice Assistants

September 2, 2023

🔬 Research Summary by William Seymour, a lecturer in computer science at King’s College London, specializing in AI privacy, security, and ethics.

[Original paper by William Seymour, Xiao Zhan, Mark Coté, and Jose Such]


Overview: We are becoming increasingly aware of ethical issues around the use of voice assistants, such as the privacy implications of devices that are always listening and the ways these devices are integrated into existing social structures in the home. This has created a burgeoning area of research across fields including computer science, social science, and psychology, which we mapped through a systematic literature review of 117 research articles. In addition to analyzing specific areas of concern, we also explored how different research methods are used and who gets to participate in research on voice assistants.


Introduction

Is Alexa always recording? There is something unsettling about devices that are always listening to you. The research community has been studying people’s concerns with voice assistants and working on ways to mitigate them. While this work began with privacy concerns, it has expanded to explore topics like accessibility, the social nature of interactions with voice assistants, how they are integrated into the home, and how the design of their personalities fits into societal norms and stereotypes around gender.

This work is incredibly diverse and draws on disciplines including computer science, law, psychology, and the social sciences, making it difficult to follow the frontiers of the field or to connect discoveries in one discipline with those in another. To address this, we systematically reviewed research on ethical concerns with voice assistants.

In addition to a detailed analysis of nine major concerns, we also examined how research on the topic is conducted. Despite diversity and inclusion efforts across many disciplines in the field, 94% of the papers recruiting human participants drew them solely or mostly from Europe and North America. There was also a noticeable shift towards using quantitative methods between 2019 and 2021 (41% to 63%) as research moved online during the pandemic.

Key Insights

We analyzed 117 research articles that contained keywords related to ethical concerns with voice assistants, adopting a broad definition of ethics around what voice assistants “ought” to be based on ethical and moral norms. In addition to reading the full text of the papers, we recorded the research methods they used, what group of participants they studied, and where they lived. What follows is a summary of three of the concerns identified by the review (the others are detailed in the full paper).
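
To make the coding step concrete, here is a minimal sketch (in Python) of how tallies like the regional and methodological statistics above could be computed from coded paper records. The records, fields, and numbers below are invented placeholders for illustration, not the review’s actual data.

```python
# A minimal, hypothetical sketch of tallying coded review records to produce
# statistics like those above. The records below are invented placeholders,
# not the authors' actual dataset.
from collections import Counter

# Each coded paper: (year, method, main participant region) -- all hypothetical.
coded_papers = [
    (2019, "qualitative", "Europe"),
    (2020, "quantitative", "North America"),
    (2021, "quantitative", "North America"),
    (2021, "quantitative", "Asia"),
    (2021, "mixed", "Europe"),
]

# Share of papers whose participants came from Europe or North America.
western = sum(1 for _, _, region in coded_papers
              if region in ("Europe", "North America"))
print(f"Europe/North America samples: {western / len(coded_papers):.0%}")

# Share of quantitative papers per year, to surface shifts like 2019 -> 2021.
quantitative = Counter(year for year, method, _ in coded_papers
                       if method == "quantitative")
totals = Counter(year for year, _, _ in coded_papers)
for year in sorted(totals):
    print(year, f"{quantitative[year] / totals[year]:.0%} quantitative")
```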

Choosing What to Share with Vendors and Cohabitants

People were often uncertain when asked to characterize privacy risks around voice assistants. While it’s broadly understood that these devices are constantly listening, people don’t know how the resulting data is stored and used. This isn’t helped by built-in lights and sounds that offer little certainty about when audio is actually being processed, or by a lack of trust in on-device controls like microphone mute buttons.

In fact, it’s unclear whether any solution from, e.g., Amazon or Google could be convincing, given the widespread skepticism over the honesty of major vendors and common anecdotes of phantom activations. We found that people bypassed device controls with ‘informal’ coping mechanisms, such as not using their assistant for sensitive tasks like banking or healthcare, or simply unplugging it altogether.

Shared devices also caused problems by crossing already-established boundaries between people, with housemates, families, and guests having to, e.g., play music or buy items through just one person’s account (and credit card!). Current devices can already tell people apart by their voices, which raises the question of why this kind of shared use isn’t better supported.
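
To illustrate why better support for shared use seems technically plausible, here is a minimal sketch of routing a request to a household account by comparing speaker embeddings. Everything here is hypothetical: real assistants use trained speaker-recognition models, not the random vectors, account names, and threshold used below.

```python
# A hypothetical sketch of per-voice account routing. Real systems derive
# speaker embeddings from trained models (e.g., d-vectors/x-vectors); the
# random vectors, names, and threshold here are illustrative only.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Hypothetical enrolled household members and their stored voice embeddings.
enrolled = {
    "resident_a": rng.normal(size=64),
    "resident_b": rng.normal(size=64),
}

def route_request(utterance_embedding: np.ndarray, threshold: float = 0.6) -> str:
    """Return the best-matching account, falling back to a guest profile."""
    name, score = max(((n, cosine(utterance_embedding, e))
                       for n, e in enrolled.items()), key=lambda t: t[1])
    return name if score >= threshold else "guest_profile"

# A noisy copy of resident_a's voice matches their account; an unknown
# voice falls through to a guest profile (no access to anyone's card).
print(route_request(enrolled["resident_a"] + 0.1 * rng.normal(size=64)))
print(route_request(rng.normal(size=64)))
```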

Voice Assistant Interactions are Social Interactions

There’s a long history of people anthropomorphizing machines, and voice assistants are no different. Some researchers believe this happens because people don’t fully understand how voice assistants work. Still, experiments have shown that people apply social rules to all computers with voices and that synthetic voices trigger social responses. Essentially, we can’t help but respond to computer speech as if it were human speech. Similarly, there have been studies on using gendered pronouns to refer to voice assistants. However, researchers have since found that even voices engineered to be ‘genderless’ are still automatically and subconsciously coded as male or female by listeners (more on this in the next section).

Relatedly, we see that the more human-like a voice assistant acts and sounds, the more capable people expect it to be. This often causes disappointment and frustration when devices “sound” more intelligent than they actually are.

This inability to neatly separate interactions with people and voice assistants raises questions about whether, at a deeper level, we perceive them as people, machines, or something in between. Work in this area, often with small children, shows a mixture of responses and is thus far inconclusive.

Performance of Gender by Voice Assistants

The feminine presentation of major voice assistants is increasingly seen as problematic, and research on social responses to computer voices shows that people apply existing gender biases to voice assistants. Tech companies often rationalize this choice by pointing to people’s preference for women’s voices, but positioning these devices as “assistants” perpetuates harmful gender stereotypes when people interact with them primarily to place orders, give commands, and receive reminders. Compounding this, voice assistants have not historically been programmed to respond appropriately to misogyny and sexual harassment: until mid-2017, Siri’s response to being told “you’re a slut” was “I’d blush if I could.”

In response to this, there have been attempts to engineer gender-ambiguous voices, studies of which have found that they don’t negatively impact people’s trust and related perceptions, as one might assume. At the same time, these voices are often still coded as male/female by listeners, making their benefits unclear. Some approaches have also been criticized for equating “genderless” with the mid-point of the male/female binary they intended to break free from. Beyond voices, other design elements, such as the physical appearance of voice assistants, product branding, and pronunciation, have also been found to influence perceptions of gender.
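
The “mid-point” criticism is easy to make concrete with pitch. The fundamental-frequency figures below are rough textbook averages for adult speakers, not values from the reviewed papers; the point is that averaging pitch still parameterizes the same one-dimensional male/female axis rather than escaping it.

```python
# A back-of-the-envelope illustration of the "mid-point" critique. The mean
# fundamental frequencies (pitch) below are rough textbook figures for adult
# speakers, not values taken from the reviewed papers.
TYPICAL_F0_HZ = {"male": 120.0, "female": 210.0}

# Choosing the average pitch still places the voice on the same
# one-dimensional male<->female axis; it encodes the binary rather than
# offering an alternative to it.
midpoint_f0 = sum(TYPICAL_F0_HZ.values()) / len(TYPICAL_F0_HZ)
print(f"Midpoint 'genderless' pitch: {midpoint_f0:.0f} Hz")  # -> 165 Hz
```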

Between the lines

Having a bird’s eye view of the research conducted on a topic like this is incredibly useful: knowing the frontiers sets the agenda for future work, highlighting the major risks and concerns can inform public policy, and analyzing areas of (dis)agreement helps to show which theories are replicable and widely supported by data, and which are not.

Furthermore, understanding who is conducting and participating in research is vitally important, both for evaluating ongoing efforts to decolonize science and remove barriers to participation, and for helping us interpret the results we see in the literature: what about people who use voice assistants outside Europe and North America?

