Montreal AI Ethics Institute

Democratizing AI ethics literacy


A Systematic Review of Ethical Concerns with Voice Assistants

September 2, 2023

🔬 Research Summary by William Seymour, a lecturer in computer science at King's College London, specializing in AI privacy, security, and ethics.

[Original paper by William Seymour, Xiao Zhan, Mark CotƩ, and Jose Such]


Overview: We're increasingly becoming aware of ethical issues around the use of voice assistants, such as the privacy implications of having devices that are always listening and the ways that these devices are integrated into existing social structures in the home. This has created a burgeoning area of research across various fields, including computer science, social science, and psychology, which we mapped through a systematic literature review of 117 research articles. In addition to analysis of specific areas of concern, we also explored how different research methods are used and who gets to participate in research on voice assistants.


Introduction

Is Alexa always recording? Something is unsettling about devices that are always listening to you. The research community has been studying people's concerns with voice assistants and working on ways to mitigate them. While this began with privacy concerns, it has expanded to explore topics like accessibility, the social nature of interactions with voice assistants, how they're integrated into the home, and how the design of their personalities fits into societal norms and stereotypes around gender.

This work is incredibly diverse and draws from various disciplines, including computer science, law, psychology, and the social sciences, meaning it can be difficult to follow the frontiers or connect discoveries from one field with another. We systematically reviewed research on ethical concerns with voice assistants to address this.

In addition to a detailed analysis of nine major concerns, we also examined how research on the topic is conducted. Despite diversity and inclusion efforts across many disciplines in the field, 94% of the papers recruiting human participants drew them solely or mostly from Europe and North America. There was also a noticeable shift towards using quantitative methods between 2019 and 2021 (41% to 63%) as research moved online during the pandemic.

Key Insights

We analyzed 117 research articles that contained keywords related to ethical concerns with voice assistants, adopting a broad definition of ethics around what voice assistants "ought" to be based on ethical and moral norms. In addition to reading the full text of the papers, we recorded the research methods they used, what group of participants they studied, and where they lived. What follows is a summary of three of the concerns identified by the review (the others are detailed in the full paper).

Choosing What to Share with Vendors and Cohabitants

People were often uncertain when asked to characterize the privacy risks around voice assistants. While it's broadly understood that the devices are constantly listening, people don't know how this data is stored and used. This uncertainty isn't helped by built-in lights and sounds that offer little assurance about when audio is being processed, or by a lack of trust in on-device controls like microphone mute buttons.

In fact, it's unclear whether any solution from, e.g., Amazon or Google could be convincing, given widespread skepticism over the honesty of major vendors and common anecdotes of phantom activations. We found that people bypassed device controls with 'informal' coping mechanisms, such as avoiding their assistant for sensitive tasks like banking or healthcare, or simply unplugging it altogether.

Shared devices also caused problems by crossing already-established boundaries between people, with housemates, families, and guests having to, e.g., play music or buy items through just one person's account (and credit card!). Current devices can tell people apart by their voices, so the question arises as to why this kind of shared use isn't better supported.

Voice Assistant Interactions are Social Interactions

There's a long history of people anthropomorphizing machines, and voice assistants are no different. Some researchers believe this happens because people don't fully understand how voice assistants work. Still, experiments have shown that people apply social rules to all computers with voices and that synthetic voices trigger social responses. Essentially, we can't help but respond to computer speech as if it were human speech. Similarly, there have been studies on using gendered pronouns to refer to voice assistants. However, researchers have since found that even voices engineered to be 'genderless' are still automatically and subconsciously coded as male or female by listeners (more on this in the next section).

Relatedly, we see that the more human-like a voice assistant acts and sounds, the more capable people expect it to be. This often causes disappointment and frustration when devices "sound" more intelligent than they actually are.

This inability to neatly separate interactions with people and voice assistants raises questions about whether, at a deeper level, we perceive them as people, machines, or something in between. Work in this area, often with small children, shows a mixture of responses and is thus far inconclusive.

Performance of Gender by Voice Assistants

The feminine presentation of major voice assistants is increasingly seen as problematic, and research on social responses to computer voices shows that people apply existing gender biases to voice assistants. Though tech companies often rationalize this through people's preference for women's voices, the positioning of these devices as "assistants" perpetuates harmful gender stereotypes when people primarily interact with them to place orders, give commands, and receive reminders. Compounding this, voice assistants have not historically been programmed to respond appropriately to misogyny and sexual harassment: until mid-2017, Siri's response to being told "you're a slut" was "I'd blush if I could."

In response to this, there have been attempts to engineer gender-ambiguous voices, studies of which have found that they don't negatively impact people's trust and related perceptions, as one might assume. At the same time, these voices are often still coded as male/female by listeners, making their benefits unclear. Some approaches have also been criticized for equating "genderless" with the mid-point of the male/female binary they intended to break free from. Beyond voices, other design elements, such as the physical appearance of voice assistants, product branding, and pronunciation, have also been found to influence perceptions of gender.

Between the lines

Having a bird's eye view of the research conducted on a topic like this is incredibly useful: knowing the frontiers sets the agenda for future work, highlighting the major risks and concerns can inform public policy, and analyzing areas of (dis)agreement helps to show which theories are replicable and widely supported by data, and which are not.

Furthermore, understanding who conducts and participates in research is vitally important, both for evaluating ongoing efforts to decolonize science and remove barriers to participation, and for interpreting the results we see in the literature: what about people who use voice assistants outside Europe and North America?

