Montreal AI Ethics Institute

Democratizing AI ethics literacy

Top 10 Takeaways from our Conversation with Salesforce about Conversational AI

July 6, 2021

🔬 Event summary by Connor Wright, our Partnerships Manager.


Overview: Would you relate to a chatbot or voice assistant more if it were female? Would such conversational AI help you feel less lonely? Our summary of our event with Salesforce sets out to discuss just that.


Introduction

Would you feel less lonely if you had access to conversational AI? Does the naming and gender of a chatbot matter? Some believe it doesn't, some believe it does, and some believe it matters all too much. Facilitated by Kathy Baxter, Yoav Schlesinger, Greg Bennett, Connor Wright and Abhishek Gupta, our event with Salesforce explored conversational AI in depth, in the form of chatbots and voice assistants. With so much potential for both positive and negative outcomes, it makes you start to wonder: can I have a good conversation with a chatbot?

The key takeaways

Our question prompts centred on the gender and naming of different chatbots, the technology's effect on vulnerable users, and the potential for bias it brings with it. These themes immediately prompt reflection on the chatbot itself: specifically, how does it affect the basic notion of conversation?

  1. What makes a good conversation?

When thinking of programming a chatbot, you may find yourself asking what actually makes a good conversation. Is it the speed at which you obtain the answer you were looking for? How you feel afterwards? The information you learn along the way? One thing's for sure: the context in which your chatbot is deployed plays a considerable role in determining what a 'good' conversation is.

  2. Context matters

If your chatbot is there to help customers with their banking, you're not going to prioritise making the customer feel good about themselves, but rather help them achieve what they set out to do. From here, the distinction between 'narrow' and 'wide' chatbots comes to the fore. 'Narrow' chatbots are geared towards achieving a particular outcome within a very focussed context, such as a fashion brand's chatbot helping you find the item of clothing you want. 'Wide' chatbots, such as Alexa and Siri, are tasked with a varied list of activities across a wide range of contexts: you might ask Alexa both to order something from Amazon and to tell you what the weather will be like tomorrow. However, whether 'narrow' or 'wide', does a chatbot's name and gender contribute to its overall success?
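To make the 'narrow' versus 'wide' distinction concrete, here is a minimal Python sketch. Everything in it is invented for illustration (the keyword matching, handlers, and replies do not come from Salesforce or any real product): a narrow bot serves one focused context, while a wide assistant routes a single utterance across many domains.

```python
# Illustrative sketch only: all intents, keywords and replies are hypothetical.

def narrow_fashion_bot(message: str) -> str:
    """A 'narrow' chatbot: one focused context (finding clothing)."""
    if "dress" in message.lower():
        return "Here are the dresses matching your search."
    return "Sorry, I can only help you find items of clothing."


def wide_assistant(message: str) -> str:
    """A 'wide' assistant: routes each utterance across many domains."""
    handlers = {
        "order": lambda: "Placing your order...",
        "weather": lambda: "Tomorrow looks sunny with a high of 22°C.",
        "music": lambda: "Playing your playlist.",
    }
    for keyword, reply in handlers.items():
        if keyword in message.lower():
            return reply()
    return "I'm not sure how to help with that yet."


print(narrow_fashion_bot("I'm looking for a summer dress"))
print(wide_assistant("What will the weather be like tomorrow?"))
```

Real systems replace the keyword matching with trained intent classifiers, but the scoping question stays the same: one domain done well, or many domains done broadly.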

  3. Does the bot need a name and gender?

It’s curious that the majority of chatbots have been given a name and a gender. Some argue that a chatbot should have neither, as it’s just a machine completing a task. Hulubot, the chatbot in streaming service Hulu’s help centre, is an excellent example of one functioning without a human name or a gender.

However, the norm is to assign a fixed gender and name, and doing so has a lot to do with the audience the chatbot is marketed towards. People are more likely to welcome a female chatbot into their home, finding the female voice more relatable and trustworthy. One problem this causes is potentially reinforcing the gender stereotype of ‘women assistants’. So, should you be allowed to choose whether your chatbot or voice assistant is a particular gender?

  4. Should you be allowed to choose?

Alongside avoiding potential gender stereotypes, I may simply feel like talking to different ‘people’ about different things, so the decision on gender and name should be left open. For example, the chatbot or voice assistant could offer controls that let me play around with the pitch of its voice, rather than leaving me feeling like I’m talking to the same person all the time.
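Voice platforms already expose pitch as a plain parameter: the W3C's Speech Synthesis Markup Language (SSML) lets prosody be shifted relative to a voice's baseline. As a minimal sketch (the slider values are invented, and the generated markup would be handed to whatever TTS engine is actually in use), a user-facing pitch control could simply map onto an SSML prosody tag:

```python
# Minimal sketch: exposing voice pitch as a user-adjustable control via SSML,
# a W3C standard accepted by common TTS engines. The slider range below is a
# made-up example, not any vendor's recommended values.

def ssml_with_pitch(text: str, pitch_percent: int) -> str:
    """Wrap text in an SSML prosody tag shifting pitch by the given percent."""
    sign = "+" if pitch_percent >= 0 else ""
    return (
        f'<speak><prosody pitch="{sign}{pitch_percent}%">'
        f"{text}</prosody></speak>"
    )


# A user sliding a pitch control from -20% to +20%:
for user_setting in (-20, 0, 20):
    print(ssml_with_pitch("Your package arrives tomorrow.", user_setting))
```

This is one way the same underlying voice could be made to sound like different ‘people’ without baking in a single fixed persona.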

However, if the choice is left open, you run the risk of someone configuring a potentially problematic persona (such as a timid tone of voice, to feel dominant over the chatbot). Furthermore, such customisation could lead to a severe attachment to the bot itself, blurring the line between human and machine even further.

  5. Potentially getting too attached

Even when a human knows they are talking to a chatbot, that may not be enough to stop them from getting attached to, and deceived by, their interlocutor; after all, people still love anime characters despite knowing what they are. Such attachment could then be exploited by bad actors taking advantage of a person’s vulnerability. A non-human name could help combat this, though not all manipulation is necessarily a bad thing.

  6. Manipulation can have two sides to it

Manipulation can be used to achieve more positive ends, such as a voice assistant using persuasive language to remind your sick mother to take her medication. Alternatively, a voice assistant could require a parent’s voice authentication before certain products can be ordered from Alexa, dissuading children from abusing the service. In this sense, while chatbots can serve to manipulate, they can also serve to benefit human lives.
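As a purely hypothetical sketch of that second example (nothing here reflects Alexa's real API; the voiceprint IDs, categories, and functions are all invented), a purchase flow could gate restricted orders behind a registered parent's voice:

```python
# Hypothetical sketch of gating purchases behind a parent's voice
# authentication. verify_speaker() stands in for a real speaker-verification
# model; none of this reflects any real assistant's actual interface.

REGISTERED_PARENT_VOICE = "parent_voiceprint_001"
RESTRICTED_CATEGORIES = {"electronics", "alcohol", "gift_cards"}


def verify_speaker(voice_sample: str) -> str:
    """Placeholder: a real system would run speaker verification here."""
    return voice_sample  # pretend the sample resolves to a voiceprint ID


def place_order(item: str, category: str, voice_sample: str) -> str:
    """Allow restricted orders only when the speaker matches the parent."""
    if category in RESTRICTED_CATEGORIES:
        if verify_speaker(voice_sample) != REGISTERED_PARENT_VOICE:
            return f"Order for '{item}' requires a parent's voice approval."
    return f"Order placed: {item}."


print(place_order("headphones", "electronics", "child_voiceprint_007"))
print(place_order("headphones", "electronics", "parent_voiceprint_001"))
```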

  7. Chatbots for good

The chatbot’s evolution from mere machine to companion can substantially impact the darkest corners of some people’s lives. Chatbots can provide a 24/7 communication outlet to help combat loneliness and depression, and they served as digital companions during the darkest depths of last year’s pandemic. Since such experiences are not shared by all, however, the importance of an inclusive chatbot design process cannot be overstated.

  8. Designing chatbots and voice assistants with all and not just for all

A clear example can be found in differing opinions on voice assistants making recordings. Some are against voice assistants recording the daily happenings of the house. Others, however, believe this can be a crucial step towards combating gender-based violence, with voice recordings potentially providing key evidence of incidents.

Making this kind of service accessible then proves paramount as well. Whether a system incorporates local dialects and different accents, so that its benefits reach everyone, is one way of judging how good these conversational AIs are. However, do we get too carried away with such technology?

  9. Seeing a chatbot for what it is

Sometimes, without any benchmarks, we may get over-excited about conversational AI in itself. This is not helped by the personal relationships we develop with voice assistants and chatbots that have their own names and genders, which lead us to attribute more humanity to these AIs than we should. A voice assistant’s ‘gender’, for example, is really just a pitch value to which we attach our human interpretation, not a sign that the assistant actually sits somewhere on the gender spectrum.

It’s important to note that chatbots and voice assistants are programmed to say things to you rather than to understand you. Siri may, at some point, be able to book you a flight with your preferred window seat, but it would not know the reasoning behind your preference. Maybe this is for the best, given the privacy concerns associated with chatbots themselves.

  10. Privacy issues

Despite the California Chatbot Law and new EU AI law requirements regulating conversational AI, some thought-provoking questions still arise. For example, is it a privacy violation if we look at chatbot conversations involving other people in our family? If I am caught committing a crime on my neighbour’s Amazon Ring, does my neighbour have the right to share that footage with the police? The more integrated into our lives conversational AI becomes, the more such questions will surely surface.

Between the lines

Our event proved both inspiring and stimulating for me. The importance of involving everyone in the conversational AI design process is now abundantly clear, especially with the question of naming and assigning gender to a chatbot proving so rich. I find these attributes important given how they can affect the way a conversation is conducted (such as finding a female voice assistant or chatbot more trustworthy). What I caution against, however, is attributing too much personality and humanity to such AI, which can only increase the likelihood of negative manipulation and harmful emotional attachment.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
