Event summary by Connor Wright, our Partnerships Manager.
Overview: Would you relate to a chatbot or voice assistant more if it were female? Would such conversational AI help you feel less lonely? This summary of our event with Salesforce sets out to discuss just that.
Introduction
Would you feel less lonely if you had access to some conversational AI? Does the naming and gender of the chatbot matter? Some believe it does not, some believe it does, and some believe it matters all too much. In our event with Salesforce, facilitated by Kathy Baxter, Yoav Schlesinger, Greg Bennett, Connor Wright and Abhishek Gupta, conversational AI in the form of chatbots and voice assistants was explored in depth. With so much potential for both positive and negative outcomes, it makes you start to wonder: can I have a good conversation with a chatbot?
The key takeaways
With our question prompts centering on the gender and naming of different chatbots, the technology's effect on the vulnerable, and the potential for bias it brings with it, reflection on the chatbot itself is immediately called for. Specifically, how does it affect the basic notion of conversation itself?
- What makes a good conversation?
When thinking of programming a chatbot, you may find yourself wondering what actually makes a good conversation. Is it the speed at which you obtain the answer you were looking for? How you felt afterwards? The information you learnt along the way? One thing's for sure: the context in which your chatbot is deployed plays a considerable role in determining what a "good" conversation is.
- Context matters
If your chatbot is to help customers with their banking, you're not going to prioritise making the customer feel good about themselves but rather help them achieve what they set out to do. From here, the distinction between "narrow" and "wide" chatbots comes to the fore. "Narrow" chatbots are geared towards achieving a particular outcome within a very focussed context, such as a fashion brand's chatbot helping you find the item of clothing you want. "Wide" chatbots can be found in Alexa and Siri, tasked with a varied list of activities to accomplish across a wide range of contexts; for example, asking Alexa both to order something from Amazon and to tell you what the weather will be like tomorrow. The minimal sketch below makes this distinction concrete.
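As a rough illustration (every function name, intent and catalogue entry here is invented for the example, not any vendor's actual API), a "narrow" bot handles one job in one context, while a "wide" assistant routes many kinds of requests:

```python
# A minimal, hypothetical sketch of the "narrow" vs "wide" distinction.
# All names and intents are invented for illustration, not a vendor API.

def fashion_bot(query: str) -> str:
    """A 'narrow' chatbot: one focussed job (finding items of clothing)."""
    catalogue = {"jeans": "aisle 3", "jacket": "aisle 7"}
    for item, location in catalogue.items():
        if item in query.lower():
            return f"You can find the {item} in {location}."
    return "Sorry, I can only help you find items of clothing."

def wide_assistant(query: str) -> str:
    """A 'wide' assistant: routes varied requests across many contexts."""
    intents = {
        "order": lambda: "Placing your Amazon order...",
        "weather": lambda: "Tomorrow looks sunny with a high of 18 degrees.",
        "music": lambda: "Playing your playlist...",
    }
    for keyword, handler in intents.items():
        if keyword in query.lower():
            return handler()
    return "Sorry, I didn't catch that."

print(fashion_bot("Where can I find jeans?"))               # succeeds in its niche
print(fashion_bot("What's the weather tomorrow?"))          # fails outside it
print(wide_assistant("What's the weather like tomorrow?"))  # handled by routing
```

Whether "narrow" or "wide", though, does a chatbot's name and gender contribute to its overall success?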
- Does the bot need a name and gender?
It's curious that the majority of chatbots have been attributed a name and gender. Some argue that, perhaps, the chatbot should have neither, as it's just a machine completing a task. For instance, the streaming service Hulu's Hulubot in its help centre functions perfectly well without a human name or gender.
However, the norm is to assign a fixed gender and name, and doing so has a lot to do with the audience the chatbot is being marketed towards. It has been found that people are more likely to welcome a female chatbot into their home, finding the female voice more relatable and trustworthy. One problem this causes is potentially reinforcing the gender stereotype of "women as assistants", so should you be allowed to choose whether you want your chatbot or voice assistant to be a particular gender?
- Should you be allowed to choose?
Alongside avoiding any potential gender stereotypes, it may be that I feel like talking to different "people" about various things, so the decision on gender and name should be left open. For example, the chatbot or voice assistant could offer controls where I can play around with the pitch rather than feeling like I'm talking to the same person all the time.
However, if the choice is left open, you run the risk of someone reading in a potentially problematic persona (such as choosing a timid tone of voice in order to feel dominant over the chatbot). Furthermore, such customisation possibilities could lead to a severe attachment to the bot itself, blurring the line between humanity and machine even further.
- Potentially getting too attached
Even when humans know they are talking to a chatbot, that knowledge may not be enough to prevent them from getting attached to, and deceived about, their interlocutor, especially given how people still love anime characters despite knowing what they are. Such attachment could then be exploited by actors taking advantage of any vulnerability to manipulate the human involved. A non-human name could potentially serve to combat this, but not all manipulation is in itself a bad thing.
- Manipulation can have two sides to it
Manipulation can be used to achieve more positive ends, such as a voice assistant using persuasive language to remind your sick mother to take her medication. Alternatively, the voice assistant could require a parent's voice authentication before certain products can be ordered through Alexa, in order to dissuade children from abusing the service (a rough sketch of such a gate follows below). In this sense, while chatbots can serve to manipulate, they can also serve to benefit human existence.
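To make that second idea concrete, here is a minimal, hedged sketch of a purchase gate; `verify_parent_voice` is a hypothetical stand-in for a real speaker-verification service, and none of this models Alexa's actual API:

```python
# Hypothetical sketch of a parental voice-authentication gate for purchases.
# verify_parent_voice() stands in for a real speaker-verification service;
# nothing here models Alexa's actual API.

RESTRICTED_CATEGORIES = {"electronics", "gift cards", "alcohol"}

def verify_parent_voice(voice_sample: bytes) -> bool:
    """Placeholder: a real system would compare the sample against an
    enrolled parent's voiceprint and return a match decision."""
    return False  # fail closed: deny when verification is unavailable

def place_order(item: str, category: str, voice_sample: bytes) -> str:
    """Allow restricted purchases only after a parent's voice is verified."""
    if category in RESTRICTED_CATEGORIES and not verify_parent_voice(voice_sample):
        return f"Sorry, ordering {item} requires a parent's voice authentication."
    return f"Order placed: {item}."

print(place_order("toy robot", "toys", b""))              # unrestricted: allowed
print(place_order("games console", "electronics", b""))  # restricted: blocked
```

Failing closed, i.e. denying the purchase whenever verification is unavailable, is the safer default for a gate like this.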
- Chatbots for good
The chatbot's evolution from mere machine to companion can substantially impact the darkest corners of some human lives. Chatbots can provide a 24/7 communication outlet to help combat loneliness and depression, serving as a digital companion through the dark depths of last year's pandemic. However, with such experiences not being shared by all, the importance of making the chatbot design process inclusionary cannot be overstated.
- Designing chatbots and voice assistants with all and not just for all
A clear example can be found in differing opinions on voice assistants making recordings. Here, some are against voice assistants recording the daily happenings in the house. However, others believe this can be a crucial step towards combating gender violence, with voice recordings potentially providing key evidence of incidents.
Making this kind of potential service accessible then proves paramount as well. Incorporating local dialects and different accents, so that the benefit is guaranteed to all, is one aspect of judging how good these conversational AIs are. However, do we get too carried away with such technology?
- Seeing a chatbot for what it is
Sometimes, without any benchmarks, we may get over-excited about conversational AI in itself. This is not helped by the personal relationship developed through the voice assistant or chatbot having its own name and gender, which leads us to attribute more humanity to these AIs than we actually should. For example, a voice assistant's "gender" is really just a pitch value onto which we project our human interpretation, rather than the assistant actually sitting somewhere on the gender spectrum; the sketch below shows just how literal that pitch value is.
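As an illustration, many text-to-speech engines accept SSML (Speech Synthesis Markup Language), in which pitch is simply a prosody attribute; the snippet below only constructs the markup, and no particular vendor's API is assumed:

```python
# SSML (Speech Synthesis Markup Language) is accepted by many text-to-speech
# engines. The "gender" we hear is largely a pitch parameter: the same
# sentence, two prosody settings. This code only builds the markup string.

def with_pitch(text: str, pitch: str) -> str:
    """Wrap text in SSML that shifts the synthesised voice's pitch."""
    return f'<speak><prosody pitch="{pitch}">{text}</prosody></speak>'

sentence = "Hello, how can I help you today?"
print(with_pitch(sentence, "-15%"))  # lower pitch: often heard as 'male'
print(with_pitch(sentence, "+15%"))  # higher pitch: often heard as 'female'
```

The same sentence becomes a "male" or "female" voice purely by shifting a number; the interpretation is ours, not the machine's.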
It's important to note that chatbots and voice assistants are programmed to say things to you rather than to understand you. For example, Siri may, at some point, be able to book you a flight with your preferred window seat, but it would not know the reasoning behind your preference. Maybe this is, in fact, for the best, given the privacy concerns associated with chatbots themselves.
- Privacy issues
Despite the California Chatbot Law and new EU AI law requirements regulating conversational AI, some thought-provoking questions still arise. For example, is it a privacy violation if we look at the chatbot conversations of other people in our family? If I am caught committing a crime on my neighbour's Amazon Ring, does my neighbour have the right to share that footage with the police? The more integrated into our lives conversational AI becomes, the more of these questions will surely surface.
Between the lines
Our event proved both inspiring and stimulating for me. The importance of involving all in the conversational AI design process is now abundantly clear, especially with the question of naming and assigning gender to your chatbot proving extremely rich. I find attributing such aspects to the chatbot important, given how they can affect how a conversation is conducted (such as users finding a female voice assistant or chatbot more trustworthy). However, what I caution against is attributing too much personality and humanity to such AI, which can only increase the likelihood of negative manipulation and harmful emotional attachment.