
More Trust, Less Eavesdropping in Conversational AI

March 2, 2022

🔬 Column by Julia Anderson, a writer and conversational UX designer exploring how technology can make us better humans.

Part of the ongoing Like Talking to a Person series


Voice technology poses unique privacy challenges that make it difficult to earn consumer trust. Lack of user trust is a massive hurdle to the growth and ethical use of conversational AI. As Big Tech companies and governments attempt to define data privacy rules, conversational AI must continuously become more transparent, compliant and accountable.

Data fuel machine learning. In conversational AI, the data include what we say to our devices and each other, even in the privacy of our homes. Data dignity advocates have called for more control over our personal data, and regulations have grown alongside new technologies.

Eavesdropping at home

A common concern about voice assistants is the fear that someone beyond your device is listening to what you say. This alone makes people wary of putting machines with microphones around their houses. Beyond sensitive conversations being overheard, the voices of unintended speakers, such as children or guests, may be captured.

Voice assistants are among the most advanced forms of conversational AI, and they become more intelligent when trained on better data. Such training requires humans to review user voice recordings from devices. These recordings are transcribed, annotated and returned to the system to train the AI to better respond to various situations.
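To make that loop concrete, here is a minimal sketch of a human-review pipeline in Python. The data structure and the `transcribe` and `annotate` helpers are hypothetical placeholders, not any vendor's actual system:

```python
from dataclasses import dataclass

@dataclass
class ReviewedUtterance:
    audio_id: str
    transcript: str        # what the user actually said
    intent_label: str      # the annotator's judgment of the request
    was_false_wake: bool   # flagged if the device woke up by mistake

def build_training_batch(recordings, transcribe, annotate):
    """Turn a small sample of recordings into supervised training examples."""
    batch = []
    for rec_id, audio in recordings:       # a small sampled subset, not everything
        transcript = transcribe(audio)     # transcription step
        label, false_wake = annotate(audio, transcript)  # human annotation step
        batch.append(ReviewedUtterance(rec_id, transcript, label, false_wake))
    return batch                           # fed back to retrain the assistant
```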

While AI systems constantly improve based on the data they are trained on, it is how companies acquire that data that determines whether their products earn trust. Amazon and Google claim their smart speakers only review an “extremely small number of interactions” to improve the customer experience. An Echo device does not record all your conversations, but rather “wakes up” to talk to you only after it hears certain words (e.g., “Alexa”). Companies like Google also provide options to delete voice history or mute devices to give users more control over their interactions.
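A rough sketch of how such wake-word gating can work: audio sits in a short rolling buffer on the device and is only streamed onward once a small local detector fires. The `microphone`, `detect_wake_word`, and `cloud` interfaces here are illustrative assumptions, not Amazon's or Google's actual implementation:

```python
from collections import deque

BUFFER_FRAMES = 100   # roughly two seconds of audio; older frames are discarded

def run_assistant(microphone, detect_wake_word, cloud):
    """Stream audio onward only after a local detector hears the wake word."""
    buffer = deque(maxlen=BUFFER_FRAMES)   # rolling buffer, stays on the device
    awake = False
    for frame in microphone:               # continuous stream of audio frames
        if not awake:
            buffer.append(frame)           # nothing leaves the device here
            if detect_wake_word(frame):    # small on-device keyword model
                awake = True
                cloud.start_request(list(buffer))  # context since the wake word
        else:
            cloud.stream(frame)            # audio is sent out only while awake
            if cloud.request_complete():
                awake = False
                buffer.clear()
```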

Technology moves faster than regulation

Despite some safeguards, people deserve more opportunities to control how conversational AI collects personal data. Third-party apps or skills may not adhere to the same data collection guidelines as the device manufacturer, nor are they required to. The existing data privacy rights outlined in Europe’s General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA) are hardly universal. Moreover, these rules are not specific to voice interfaces and AI, which may use data differently than other technologies.

The world of voice assistants has grown from the platforms of a few Big Tech companies to several independently developed and owned assistants. Introducing third-party providers, through voice apps or integrations, also complicates how data are gathered and used. This is the consumer equivalent of being tracked on websites by cookies, which are small files that remember information about you every time you visit a site.

Nowadays, many companies ask for explicit consent to collect data when you land on their website. This push towards opt-in data collection protects your personal information before it can be shared with many entities, sometimes for advertising purposes. However, this kind of explicit consent is not yet common practice in conversational AI.
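Here is what an opt-in model could look like in code: a minimal, hypothetical sketch in which nothing is retained for training unless the user has explicitly said yes, and silence defaults to no. All names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    """Records explicit grants; anything not recorded is treated as 'no'."""
    grants: dict = field(default_factory=dict)

    def has_opted_in(self, user_id: str, purpose: str) -> bool:
        return self.grants.get((user_id, purpose), False)   # default: opted out

    def record(self, user_id: str, purpose: str, granted: bool) -> None:
        self.grants[(user_id, purpose)] = granted

def handle_request(user_id, audio, ledger, answer, store):
    response = answer(audio)            # always fulfil the request itself
    if ledger.has_opted_in(user_id, "model_training"):
        store(user_id, audio)           # retained only with explicit consent
    return response
```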

Preserving your voice

The nuances of voice technology make it difficult to create a one-size-fits-all approach to privacy. One way to assuage the fear of being recorded is on-device processing. Rather than sending your voice to the cloud, where humans may later review certain requests, your device would handle everything directly. Apple has taken this step towards transparency with Siri’s on-device processing, which also speeds up performance.
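A minimal sketch of that split, assuming a simple set of intents the device can fulfil by itself; Apple has not published Siri's internals, so the routing below is purely illustrative:

```python
LOCAL_INTENTS = {"set_timer", "play_music", "toggle_lights"}

def handle(audio, local_model, cloud):
    text = local_model.transcribe(audio)   # raw audio never leaves the device
    intent = local_model.classify(text)
    if intent in LOCAL_INTENTS:
        return local_model.execute(intent, text)   # fully on-device path
    # Fallback for harder requests: send only the transcript, not the audio.
    return cloud.query(text)
```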

When it comes to data retention, the European Data Protection Board published guidelines reminding companies to ask themselves: is it necessary to store all voice recordings? Perhaps remembering payment information is beneficial to companies and consumers, but what about recent web searches? Users often wonder what purpose all their data serves. Transparency around how people can delete personal data is crucial for building trust despite these doubts.
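One way to operationalize that question is a per-category retention policy, where each kind of data has its own window and unknown categories default to immediate deletion. The categories and windows below are illustrative assumptions, not published rules:

```python
from datetime import datetime, timedelta, timezone

RETENTION = {
    "payment_method": timedelta(days=365),  # useful to user and company alike
    "web_search": timedelta(days=30),       # short window: little ongoing value
    "raw_recording": timedelta(0),          # deleted once transcribed
}

def purge_expired(records):
    """Keep only records still inside their category's retention window."""
    now = datetime.now(timezone.utc)
    return [
        rec for rec in records    # rec: dict with "category" and "created_at"
        if now - rec["created_at"] <= RETENTION.get(rec["category"], timedelta(0))
    ]
```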

Increasing transparency may require other novel measures. The Open Voice Network explores what that could sound like. For example, people could enter a “training mode,” where the AI asks for permission to use data for training. These data could then be deleted after processing. Other suggestions include voice assistants giving proactive prompts such as “Do you want to hear XYZ company’s privacy policy?” or adapting privacy policies for text-to-speech playback.
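A hypothetical sketch of that training-mode exchange: the assistant asks first, keeps only a transcript if the user agrees, and deletes the raw audio either way:

```python
def training_mode_turn(assistant, recording, transcribe, training_queue, delete):
    """Ask before using an utterance for training; delete the audio afterwards."""
    reply = assistant.ask(
        "May I use this recording to improve my understanding? (yes/no)"
    )
    if reply.strip().lower() == "yes":
        training_queue.append(transcribe(recording))  # keep the transcript only
    delete(recording)   # raw audio is removed either way, post-processing
```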

These practices are key whenever any sort of biometric data are used, as in voice authentication. An ideal approach is active enrollment, a type of opt-in that pairs consent with a quick explanation of how voice matching benefits the product experience. When implemented correctly, this type of technology could even verify health records or secure online portals. However, using audio recordings of someone else’s voice may introduce another type of privacy invasion.
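A toy sketch of active enrollment and verification, where a placeholder `embed` function stands in for a trained speaker-embedding model (real systems use neural encoders and carefully tuned thresholds):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms if norms else 0.0

def enroll(samples, embed):
    """Average the embeddings of a few phrases the user consented to record."""
    vectors = [embed(s) for s in samples]
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def verify(sample, voiceprint, embed, threshold=0.80):
    """Accept the speaker only if the new sample matches the enrolled print."""
    return cosine(embed(sample), voiceprint) >= threshold
```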

Vocal accountability

Every bit of data gathered is a puzzle piece about one’s life. While the commands we give our voice assistants may be innocuous, what the AI infers from them may misrepresent us. In that case, what we don’t know could harm our future experiences with these products.

Keeping AI transparent, asking for user consent and providing data controls are all steps in the right direction. If we treat our voices as carefully as we treat our passwords, then companies should guard our identities with the same care.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
