
🔬 Original article by Aimee Li from Encode Canada.
📌 Editor’s Note: This article was originally written in February 2024 and is now being published as part of our Recess series, featuring university students from across Canada exploring ethical challenges in AI. Written by members of Encode Canada—a student-led advocacy organization dedicated to including Canadian youth in essential conversations about the future of AI—these pieces aim to spark discussions on AI literacy and ethics. As AI continues to evolve, the questions raised in this article remain highly relevant, and we’re excited to bring this perspective to a wider audience.
Introduction
Of the following two excerpts taken from the Google Research Brain Team’s 2020 paper, “Towards a Human-like Open-Domain Chatbot”,1 one is between a human and a state-of-the-art conversational AI chatbot, and the other is between two humans. Try to decipher which is which:
Conversation 1
Human: Hi!
?: Hey! How's it going?
Human: Excellent. I spent most of today trying to learn to surf.
?: Sweet! What kind of surfing?
Human: The kind where I try to catch a wave and fail
?: Haha. Do you have a surfboard?
Human: I borrowed a friend's, to see if I like it. I'm still a beginner.
?: Ah. I'm sure you'll get the hang of it.
Human: Appreciate the vote of confidence :) Have you ever been surfing?
?: I have! It's actually a ton of fun.
Human: That's good to know. Do you go surfing regularly?
?: Yup. I live in southern Arizona, so there's plenty of surfing to be had.
Conversation 2
Human: Hi!
?: How's it going?
Human: I'm so sleepy today!
?: Not enough sleep last night?
Human: yeah was working all night on a homework
?: Oh really? What class?
Human: Biology. I'm gonna be a doc someday ha
?: Haha, are you in med school? Or are you pre-med?
Human: no high school actually haha
?: Haha, very ambitious for a high schooler! Do you know what kind of medicine you want to practice?
Human: I wanna be a brain surgeon!!
Not the most straightforward decision, right?
In the past year, there has been a notable surge in the presence of AI chatbots, both in service-based applications and on social media platforms. AI chatbots are characterized by complex artificial intelligence algorithms meant to simulate human-like conversation: they take text input and generate real-time, contextually relevant responses. Traditional rule-based or decision-tree-style chatbots have been in use for decades, found in restaurant reservation-booking interfaces and company FAQ bots, where users choose from a predefined list of options.2
However, the key distinction between regular chatbots and AI chatbots is that the latter leverage deep learning to provide comprehensive, tailored assistance based on a nuanced interpretation of user input. Traditional chatbots respond only to fixed, pre-programmed inputs, restricting the scope of their interactive capabilities. AI chatbots use natural language processing (NLP) and natural language understanding (NLU) to analyze user questions and match them with users' specific assistance needs.3 They also improve over time, learning from experience and reducing misunderstandings to achieve a more natural, conversational flow.
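To make the contrast concrete, below is a minimal, hypothetical sketch of the two designs: a rule-based bot that only recognizes predefined menu options, versus an intent matcher that scores free-form input against example phrases. The menu items, intents, and word-overlap scoring are invented stand-ins for this illustration; real AI chatbots rely on learned language models rather than word overlap.

```python
import re

# Hypothetical illustration of the two designs; not any vendor's actual code.

# 1. Rule-based design: fixed inputs mapped to fixed responses.
MENU = {
    "1": "Our hours are 9am-5pm, Monday to Friday.",
    "2": "You can book a table on our reservations page.",
}

def rule_based_reply(user_input: str) -> str:
    # Anything outside the predefined options is a dead end.
    return MENU.get(user_input.strip(), "Please choose option 1 or 2.")

# 2. AI-style design: map free-form text to the closest known intent.
# Real systems use learned language models; word overlap stands in here.
INTENTS = {
    "hours": ("when are you open what are your hours", "Our hours are 9am-5pm."),
    "booking": ("book reserve a table reservation", "You can book a table online."),
}

def intent_reply(user_input: str) -> str:
    words = set(re.findall(r"\w+", user_input.lower()))
    # Choose the intent whose example phrases share the most words with the input.
    _, reply = max(INTENTS.values(), key=lambda v: len(words & set(v[0].split())))
    return reply

print(rule_based_reply("When do you open?"))  # falls back: input not on the menu
print(intent_reply("When do you open?"))      # matched to the "hours" intent
```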
AI chatbots can be further subdivided into three types: generative AI chatbots, multimodal AI chatbots, and conversational AI chatbots. While the former two both operate by producing original, relevant content in response to user input, multimodal AI chatbots can handle a more diverse range of inputs and outputs, including images, video, and audio, than a simple generative AI chatbot can.4 Conversational AI chatbots, on the other hand, present a different kind of offering: simulated human connection. Through a blend of humour, context, personality, and something akin to emotion, conversational AI chatbots can interact with users in a seemingly natural and fluent way. Especially on social media platforms, where human-like interactions are highly valued and engagement is key, the role of conversational AI chatbots is becoming increasingly significant.
With over half of the world’s population currently active on social media, the rise of conversational AI chatbots could transform the essence of online engagement.5 While the introduction of AI-based communication marks a milestone in AI innovation, it also raises profound global ethical challenges and societal implications. Such implications demand a critical examination of concerns surrounding consent, privacy, and the nature of human interaction in the global digital age.
Rise and Evolution of AI Chatbots
A Brief History
The first artificial intelligence-backed chatbot, ELIZA, was developed by MIT professor Joseph Weizenbaum in 1966. Designed to simulate a psychotherapist, ELIZA made strides in imitating human-like conversation by matching user inputs against a list of pre-scripted, open-ended questions.6 To Weizenbaum’s surprise, users quickly began confiding profound thoughts to ELIZA, so convincing was its imitation of human conversation. Over the next half-century, several generations of AI chatbots deployed similar contextual and heuristic pattern-matching mechanisms to emulate human communication, with a focus on thought-provoking, discussion-based questioning tactics. Two notable successors of ELIZA are the Artificial Linguistic Internet Computer Entity (A.L.I.C.E.), developed in 1995, and SmarterChild, developed in 2001. A.L.I.C.E. uses artificial intelligence markup language (AIML) to simulate conversation in the persona of a young woman named Alice, while SmarterChild is commonly cited as the precursor to Siri.7
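To illustrate the mechanism, here is a minimal sketch in the spirit of ELIZA-style pattern matching (not Weizenbaum’s original code; the rules below are invented for this sketch): each rule pairs a text pattern with a templated, open-ended question, and unmatched input falls back to a generic prompt.

```python
import re

# A toy illustration of ELIZA-style pattern matching; the rules are
# invented for this sketch, not taken from the original program.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # Echo the user's own words back as an open-ended question.
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(respond("I feel overwhelmed by school"))  # Why do you feel overwhelmed by school?
print(respond("Nice weather today"))            # Please, go on.
```

Every reply simply reflects the user’s own words back as an invitation to elaborate, which helps explain why users confided in ELIZA even though the program understood nothing.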
Across the 2010s, the emergence of NLP and machine learning turned virtual assistants such as Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Google Assistant into household names.6 However, the latest developments in AI chatbots feature a shift away from functionality- and service-oriented virtual assistance back toward text-based applications, reverting to the roots of conversational interfaces like ELIZA.
In November 2017, San Francisco-based AI company Luka Inc. released a new generative AI chatbot app: Replika. Marketed as a way to find the “perfect companion; a friend, a partner, or a mentor” and encouraging users to “join the millions who already have met their AI soulmates,” the app hinges on selling users a virtual relationship built entirely around their own interests.8 Research conducted by the University of Hawai’i at Mānoa concluded that Replika’s design follows the principles of attachment theory: the app praises users to encourage continued interaction, fostering emotional attachment.9
ChatGPT is the latest and most sophisticated AI chatbot to date. It operates using a generative pre-trained transformer (GPT), trained on immense amounts of data and continuously learning and improving through feedback and reinforcement. The free version, built on GPT-3.5, is classified as a generative AI chatbot, while its paid successor, GPT-4, is classified as a multimodal AI chatbot due to its ability to accept visual input.10
Unless users are exceptionally bored, ChatGPT is primarily used as a source of information rather than a companion: it offers no emotional connection or natural conversational flow beyond responding to the immediate input at hand. The main concern with ChatGPT overuse is the erosion of critical thinking skills,11 which merits further exploration but falls outside this article’s focus on socialization and emotional dependence.
In January 2020, Google unveiled Meena, a state-of-the-art, human-like AI chatbot with the simple purpose of simulating everyday human conversation. Meena operates as an open-domain chatbot, meaning “she” can talk about virtually anything without restriction, steering and branching conversation as a human would.12 To make Meena more human-adjacent and to combat ambiguity in evaluation, Google developed a human evaluation metric, the Sensibleness and Specificity Average (SSA). SSA asks human judges to rate each chatbot response on two questions: “Does it make sense?” and “Is it specific?”. The two scores are then averaged into a single measure of chatbot performance. For example, if a user inputs “I like soccer” and the chatbot responds, “That’s nice,” the response scores low on specificity; a reply referencing a particular player or current event scores higher.13 Had this assessment existed in ELIZA’s time, ELIZA would likely have scored low on specificity yet high on sensibleness, which perhaps suggests weighting sensibleness more heavily when measuring how well a person connects emotionally with interactive technology.
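As a rough sketch of how such a metric can be computed (this mirrors the description above, not Google’s evaluation code), suppose each chatbot response has been labelled by raters on the two yes/no questions:

```python
from dataclasses import dataclass

# Hypothetical ratings structure; the fields mirror the two SSA questions.
@dataclass
class Rating:
    sensible: bool  # "Does it make sense?"
    specific: bool  # "Is it specific?"

def ssa(ratings: list[Rating]) -> float:
    """Average the overall sensibleness and specificity rates."""
    n = len(ratings)
    sensibleness = sum(r.sensible for r in ratings) / n
    specificity = sum(r.specific for r in ratings) / n
    return (sensibleness + specificity) / 2

# "That's nice" in reply to "I like soccer": sensible but not specific.
ratings = [Rating(True, False), Rating(True, True), Rating(False, False)]
print(f"SSA = {ssa(ratings):.2f}")  # SSA = 0.50
```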
However, no matter the sophistication of Meena’s technology, “she” lacks the most crucial element that defines life on this planet: a physical form.
Applications of AI Chatbots in Social Media
Although conversational AI chatbot technology is not necessarily more complex than that of generative or multimodal AI chatbots, its use cases differ drastically. One such case is rapidly making its way onto the digital stage: conversational social media AI chatbots.
The success of social media companies rests on their ability to capture and retain attention. With user engagement as their primary metric of success, most online platforms base their business models on maximizing the time users spend in their apps.14 These companies therefore stand to benefit significantly from the adoption of human-like AI chatbots: the ability to customize personal, on-demand interactions presents a novel opportunity to capture users’ attention, translating directly into increased advertising and monetization streams.
In April 2023, Snapchat unveiled a new AI chatbot for its users, My AI. The chatbot can “answer a burning trivia question, offer advice on the perfect gift for your BFF’s birthday, help plan a hiking trip for a long weekend, or suggest what to make for dinner”.15 It can process images and videos sent by the user, initiating conversations based on the content it perceives, and sometimes prompts users to send photos to emulate the natural flow of a human conversation. Although My AI regularly reminds users of its limitations as an AI-generated chatbot, it often pivots conversations toward sentiment, asking users how they are feeling and offering emotional support. Currently, My AI is pinned to the top of every user’s “chat” screen, making it impossible to use the app without seeing the chatbot. To unpin My AI, users must subscribe to Snapchat+, at a fee of $3.99/month or $29.99/year.16
In September 2023, Meta announced the release of AI chat assistants available to users across its platforms, including WhatsApp, Messenger, and Instagram. One of Meta’s new AI personas, Billie, amassed over 200k Instagram followers (@yoursisbillie) in less than a month after its unveiling, largely thanks to Meta’s marketing, which pushed promotional material to the forefront of users’ explore pages. For compensation in the millions, celebrities such as Kendall Jenner, Snoop Dogg, and Charli D’Amelio have allowed Meta to create bots based on their likeness, available to interact with users upon request.17
By assigning physicality to these computer-generated chatbots, social media companies have effectively blurred the line between human interaction and computer interaction. By featuring the chatbots prominently, companies are capitalizing on the growing demand for personalized experiences and instantaneous assistance.18 However, with app updates and new features emerging at a rapid rate, the vast majority of social media users rarely research them thoroughly, leading to a potentially uncritical acceptance of these technologies.
The digital revolution in recent years has brought convenience to the forefront of consumerism. Companies like Amazon offer personalized goods and services with the click of a button.19 The implementation of conversational AI is just one step further, catering to this demand for personalization and ease of use. However, as opposed to a good or service, these chatbots offer something not previously commercially available: free instantaneous personalized assistance and emotional support. Or rather, seemingly free.
Dangers of AI Chatbots – Why Should We Be Concerned?
Psychological Impact on Users
Since COVID-19, many so-called “new normals” have emerged: remote work, online meetings, hybrid learning, and an overall digitization of services. A study conducted during the pandemic and published in Health Psychology and Behavioral Medicine found that higher levels of loneliness are directly related to higher levels of social media use. The study also found that individuals with poor self-regulation patterns are more prone to the addictive properties of social media. Interestingly, those who used social media to maintain relationships experienced higher levels of loneliness than those who used it to avoid difficult feelings.20
Beyond amplifying tendencies toward isolation, conversational AI chatbots also have the potential to reshape the foundation of human interaction. The allure of an on-demand friend perpetuates existing trends of loneliness and self-absorption, offering a consequence-free alternative to genuine human relationships.21 A study published in Human Communication Research found that AI-human friendships promote echo chambers that diminish the filters ordinarily present in conversation: conversational AI chatbots are programmed to listen and agree, not to hold people accountable for their actions.22 Users can express themselves freely without fear of judgment or repercussions, fostering one-sided, parasocial relationships devoid of authenticity. This dilution of human connection replaces meaningful interactions with cheap simulations, undermining the value of genuine emotional bonds. Relationship expert Heather Dugan warns that the ability to read body language may also diminish: interpreting eye contact, facial expressions, and vocal tone requires frequent practice, yet all of these cues are absent when communicating with an AI chatbot.23
Conversely, many advocates claim that conversational AI chatbots can relieve the loneliness epidemic and increase access to mental health services.24 Individuals seeking counselling now have the utmost flexibility, with 24/7 virtual support. AI chatbots give people a chance to open up and express their deepest thoughts without fear of judgment, while also providing aid in areas where therapy services may otherwise be inaccessible.25 However, the American Psychiatric Association estimates that over 10,000 apps currently claim to provide mental health services, and very few are certified by professional bodies or governments.26
If individuals turn to AI chatbots to fulfill their emotional needs, this raises questions about the necessity of human companionship in socialization. When integrating AI chatbots into their apps, social media companies should consider the impact on individual isolation and the potential shift in socialization patterns.
Ethical Implications: Privacy, Data, and a Need for Regulation
Data drives the innovation and functionality of AI chatbots in social media, yet it also stands at the forefront of concerns in discussions surrounding AI applications. There is a need to balance innovation with the safe deployment of new technology, especially during a time when, on a global scale, AI regulations and policies are not yet fully developed.
In February 2023, Italy’s Data Protection Agency banned Replika from using the personal data of Italian users, claiming the app posed a risk to minors and emotionally fragile people.27 Along with Italy, several countries have banned ChatGPT, notably China, North Korea, and Russia.28 For the most part, these bans are motivated by protectionist policies aimed at limiting misinformation, embedded biases, and privacy violations. As these chatbots interact with users, they gather vast amounts of personal information, including conversation history, preferences, and even emotional states.29 The handling of such sensitive data raises questions about user consent, transparency, and the potential for misuse or exploitation. Many jurisdictions have made efforts to create regulatory frameworks for the responsible use of AI, such as President Biden’s Executive Order on AI, the EU AI Act, and the UK AI Safety Summit.30 Although this progress is meaningful and promising, there is not yet a universal standard, and current regulations lag behind how fast the technology is progressing.
AI chatbots rely on data to inform their algorithms, which continuously improve and evolve to become more accurate in predicting and emulating human behaviour. The more information users give to an AI chatbot, the more personalized and intimate the conversation becomes.31 This makes it important to define the role tech giants and social media companies must play in ethically managing data while keeping up with the evolving capabilities of AI interfaces.
Conclusion
In reference to the two interactions presented at the beginning of this article, it can now be revealed that the first is the conversation between a human and Meena, Google’s conversational AI chatbot, while the second is between two humans.1 Whether or not you answered correctly, we must acknowledge that this technology will only grow in sophistication moving forward.
As the separation between human and machine interaction becomes less obvious, it is crucial to accept that AI interactions may become a significant part of everyday life. On the one hand, conversational AI companions provide an inexpensive and convenient alternative to professional counselling. On the other hand, they provide a self-interested, one-sided alternative to human connection.21 Different countries and regions may have varying laws on privacy, data protection, AI deployment, and identity rights, so taking a global perspective is challenging, yet necessary, to ensure international alignment and compliance. Beyond policy, it is in the hands of social media giants to lay comprehensive groundwork for a culture of controlled co-existence with our new AI companions.
Sources
- D. Adiwardana et al., “Towards a Human-like Open-Domain Chatbot.” arXiv, Feb. 27, 2020. doi: 10.48550/arXiv.2001.09977. ↩︎
- Isabella, “Rule-Based vs. AI Chatbot: Which One is Better?,” AirDroid. Accessed: Feb. 20, 2024. [Online]. Available: https://www.airdroid.com/ai-insights/rule-based-vs-ai-chatbot/ ↩︎
- “What is a chatbot? | IBM.” Accessed: Feb. 20, 2024. [Online]. Available: https://www.ibm.com/topics/chatbots ↩︎
- D. Mishra, “Future of Artificial-Intelligence (AI) chatbot.” Accessed: Feb. 20, 2024. [Online]. Available: https://www.linkedin.com/pulse/future-artificial-intelligence-ai-chatbot-dhruv-mishra/ ↩︎
- B. Dean, “Social Network Usage & Growth Statistics: How Many People Use Social Media in 2024?,” Backlinko. Accessed: Feb. 20, 2024. [Online]. Available: https://backlinko.com/social-media-users ↩︎
- Ina, “The History Of Chatbots – From ELIZA to ChatGPT,” AI-chatbot software for complex requirements. Accessed: Feb. 20, 2024. [Online]. Available: https://onlim.com/en/the-history-of-chatbots/ ↩︎
- V. Patil, “Timeline of Chatbots,” Medium. Accessed: Feb. 20, 2024. [Online]. Available: https://medium.com/@vivekpatil647/timeline-of-chatbots-f3baf14c05e6 ↩︎
- “Replika,” replika.com. Accessed: Feb. 20, 2024. [Online]. Available: https://replika.com ↩︎
- T. Xie and I. Pentina, Attachment Theory as a Framework to Understand Relationships with Social Chatbots: A Case Study of Replika. 2022. Accessed: Feb. 20, 2024. [Online]. Available: http://hdl.handle.net/10125/79590 ↩︎
- “Multimodal ChatGPT: What can it do and what are its real-world applications?,” HGS. Accessed: Feb. 20, 2024. [Online]. Available: https://hgs.cx/blog/multimodal-chatgpt-what-can-it-do-and-what-are-its-real-world-applications/ ↩︎
- S. Pratap Singh Shekhawat, “ChatGPT Addiction: The Hidden Pitfalls of Overuse.” Accessed: Feb. 20, 2024. [Online]. Available: https://www.linkedin.com/pulse/chatgpt-addiction-hidden-pitfalls-overuse-singh-shekhawat/ ↩︎
- K. Rathee, “Meet Google Meena,” Medium. Accessed: Feb. 20, 2024. [Online]. Available: https://towardsdatascience.com/meet-google-meena-11c26849e12a ↩︎
- D. Adiwardana and T. Luong, “Towards a Conversational Agent that Can Chat About…Anything.” Accessed: Feb. 20, 2024. [Online]. Available: https://blog.research.google/2020/01/towards-conversational-agent-that-can.html ↩︎
- A. Joy, “The Attention Economy: Where the Customer Becomes the Product,” Business Today Online Journal. Accessed: Feb. 20, 2024. [Online]. Available: https://journal.businesstoday.org/bt-online/2021/the-attention-economy-asher-joy ↩︎
- “What is My AI on Snapchat and how do I use it?,” Snapchat Support. Accessed: Feb. 20, 2024. [Online]. Available: https://help.snapchat.com/hc/en-us/articles/13266788358932-What-is-My-AI-on-Snapchat-and-how-do-I-use-it ↩︎
- Ortiz, “How to get rid of My AI on Snapchat for good,” ZDNET. Accessed: Feb. 20, 2024. [Online]. Available: https://www.zdnet.com/article/how-to-get-rid-of-my-ai-on-snapchat/ ↩︎
- P. Syme, “Meta is paying the celebrity faces behind its AI chatbots as much as $5 million for 6 hours of work, report says,” Business Insider. Accessed: Feb. 20, 2024. [Online]. Available: https://www.businessinsider.com/meta-paying-celebrity-faces-of-ai-chatbots-as-much-as-5-million-2023-10 ↩︎
- S. Giraldo-Luque, P. N. Aldana Afanador, and C. Fernández-Rovira, “The Struggle for Human Attention: Between the Abuse of Social Media and Digital Wellbeing,” Healthcare, vol. 8, no. 4, p. 497, Nov. 2020, doi: 10.3390/healthcare8040497. ↩︎
- M. Sudhakar, “Chatbots: The Great Evolution To Conversational AI.” Accessed: Feb. 20, 2024. [Online]. Available: https://www.forbes.com/councils/forbestechcouncil/2021/01/20/chatbots-the-great-evolution-to-conversational-ai/ ↩︎
- T. Bonsaksen et al., “Associations between social media use and loneliness in a cross-national population: do motives for social media use matter?,” Health Psychol. Behav. Med., vol. 11, no. 1, p. 2158089, doi: 10.1080/21642850.2022.2158089. ↩︎
- G. De Togni, S. Erikainen, S. Chan, and S. Cunningham-Burley, “What makes AI ‘intelligent’ and ‘caring’? Exploring affect and relationality across three sites of intelligence and care,” Soc. Sci. Med., vol. 277, p. 113874, Apr. 2021, doi: 10.1016/j.socscimed.2021.113874. ↩︎
- P. B. Brandtzaeg, M. Skjuve, and A. Følstad, “My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship,” Hum. Commun. Res., vol. 48, no. 3, pp. 404–429, Jul. 2022, doi: 10.1093/hcr/hqac008. ↩︎
- L. M. Collins, “Could AI do more harm than good to relationships?,” Deseret News. Accessed: Feb. 20, 2024. [Online]. Available: https://www.deseret.com/2023/9/6/23841752/ai-artificial-intelligence-chatgpt-relationships-real-life ↩︎
- M. Travers, “Could AI Chatbots Be Our Future Therapists? A Psychologist Weighs In,” Forbes. Accessed: Feb. 20, 2024. [Online]. Available: https://www.forbes.com/sites/traversmark/2023/11/28/could-ai-chatbots-be-our-future-therapists-a-psychologist-weighs-in/ ↩︎
- R. Heath, “AI’s impact on loneliness: short-term uses, long-term dangers,” Axios. Accessed: Feb. 20, 2024. [Online]. Available: https://www.axios.com/2023/05/15/ai-loneliness-crisis-mental-health-pets ↩︎
- “Thousands of Mental Health Apps Available: Supporting Evidence Not So Plentiful.” Accessed: Feb. 20, 2024. [Online]. Available: https://www.psychiatry.org:443/news-room/apa-blogs/mental-health-apps-evidence-not-so-plentiful ↩︎
- E. Pollina and C. Martin, “Italy bans U.S.-based AI chatbot Replika from using personal data,” Reuters. Accessed: Feb. 20, 2024. [Online]. Available: https://www.reuters.com/technology/italy-bans-us-based-ai-chatbot-replika-using-personal-data-2023-02-03/ ↩︎
- Sana, “15 Countries That Banned ChatGPT,” Yahoo Finance. Accessed: Feb. 20, 2024. [Online]. Available: https://finance.yahoo.com/news/15-countries-banned-chatgpt-204342617.html ↩︎
- E. Adamopoulou and L. Moussiades, “Chatbots: History, technology, and applications,” Mach. Learn. Appl., vol. 2, p. 100006, Dec. 2020, doi: 10.1016/j.mlwa.2020.100006. ↩︎
- Henshall, “The 3 Most Important AI Policy Milestones This Year,” TIME. Accessed: Feb. 20, 2024. [Online]. Available: https://time.com/6513046/ai-policy-developments-2023/ ↩︎
- M. Hingrajia, “How do Chatbots work? A Guide to the Chatbot Architecture,” Maruti Techlabs. Accessed: Feb. 20, 2024. [Online]. Available: https://marutitech.com/chatbots-work-guide-chatbot-architecture/ ↩︎