
In 2020, Nobody Knows You’re a Chatbot

November 2, 2020

Written by Connor Wright, research intern at FairlyAI, a software-as-a-service AI audit platform that provides AI quality assurance for automated decision-making systems, with a focus on AI equality. He is also a researcher here at MAIEI.


The classic ’90s adage of “On the internet, nobody knows you’re a dog” has evolved. Instead of nobody knowing you’re a different kind of being, the tagline now refers to whether you’re interacting with someone who actually exists. The rise of chatbots has called into question what the very core of conversation ought to be, and has raised ethical issues alongside it. So, what actually is a chatbot? Who has taken this issue seriously? Can chatbots actually be a force for good? I’ll answer all three of these questions throughout this discussion.

To start with, an interesting distinction can be made in the chatbot arena, namely between chatbots and social bots. A chatbot is a bot engaged in direct dialogue with a single human counterpart (such as a virtual assistant on a website). A social bot is a bot that is not directly involved in dialogue with a human and is instead oriented towards disseminating content (such as automated profiles spreading fake news on Facebook and Twitter). What makes this distinction even more interesting is its role in the most prominent piece of chatbot legislation I could find: California’s Bolstering Online Transparency Act.

The Act (referred to from here on as the California Act) came into effect in 2019 in order to combat chatbots’ capacity to deceive. Chatbots had been spreading false information at alarming rates, and were often able to convince people to make a certain political or business decision while believing they had been talking with a human. The risks of chatbots therefore centre on their ability to spread false information that can then have harmful effects on society.

Hence, the California Act aims to mitigate this. Under the Act, a bot is defined as an online account where most (if not all) of the content is not the result of a person. From there, the Act only targets bots that interact with humans in order to deceive. Some may then argue that social bots are excluded, since they don’t actually interact with humans and merely feed them content. However, they would be wrong: because social bots intend for their content to be acted upon by humans, they too fall under the remit of the Act. Above all, any automated account looking to deceive a human within California will have to answer to the California Act. Now, how does it accomplish this?

The proposed solution is devilishly simple. Whenever a chatbot engages with a human, its first message has to contain a clear and conspicuous declaration that the chatbot is indeed a chatbot. For example, a chatbot message such as “Hey, how can I help you?” doesn’t cut it. Instead, the chatbot ought to say something along the lines of “Hi, I’m Connor, your automated virtual assistant. How may I help you?”. To keep such disclosures from being buried, the FTC issued guidelines on how they ought to be presented: the font and colour of the text used to introduce the chatbot must be clear, and the customer must not have to scroll anywhere to discover the disclosure. Given the problem being addressed by the legislation, are chatbots only meant to deceive?
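The Act prescribes an outcome rather than an implementation, but the mechanics are easy to picture. Below is a minimal Python sketch of the idea; the function names and disclosure wording are hypothetical assumptions, not taken from the legislation or the FTC guidelines.

```python
# Minimal sketch of a chatbot that leads its first message with a clear,
# conspicuous bot disclosure. All names and wording here are illustrative.

DISCLOSURE = "Hi, I'm an automated virtual assistant, not a human."

def generate_reply(user_message: str) -> str:
    # Placeholder for the real response logic (hypothetical backend).
    return "How may I help you?"

def respond(history: list[str], user_message: str) -> str:
    """Return the bot's reply, disclosing its automated nature up front."""
    reply = generate_reply(user_message)
    if not history:  # first exchange in the conversation
        return f"{DISCLOSURE} {reply}"
    return reply

print(respond([], "Hello"))  # first turn: disclosure comes first
```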

Fortunately, there is a positive side to the emergence of chatbots. Their automated nature has allowed businesses to dedicate valuable time to more pressing issues rather than time-consuming admin tasks. Chatbots have also allowed businesses to provide 24/7 customer care, with virtual assistants answering brief customer queries at any time of day and in any time zone, increasing customer satisfaction, reducing hold times over the phone, and thus easing the workload on customer service teams.

Furthermore, chatbots are now being trained to spot common typos that customers may make. Supplying datasets of words and their associated typos allows a virtual assistant to accommodate customers’ varying language skills, and spares it from having to ask the customer to repeat themselves, causing frustration. The California Act’s disclosure requirement also helps here: a customer who knows they are talking to a bot can expect to phrase their messages slightly differently to avoid such frustration. In this way, chatbots can be seen as augmentative and not just deceitful, as the sketch below illustrates.
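The article doesn’t name a specific technique, but one simple way to approximate this typo tolerance is fuzzy string matching against a vocabulary of known terms. The sketch below uses Python’s standard-library difflib; the vocabulary and similarity threshold are illustrative assumptions.

```python
import difflib

# Illustrative vocabulary of terms the assistant understands.
KNOWN_TERMS = ["refund", "invoice", "delivery", "cancel", "subscription"]

def normalise(word: str) -> str:
    """Map a possibly misspelled word to the closest known term."""
    matches = difflib.get_close_matches(word.lower(), KNOWN_TERMS,
                                        n=1, cutoff=0.7)
    return matches[0] if matches else word  # fall back to the original word

print(normalise("refnud"))   # -> "refund"
print(normalise("delivry"))  # -> "delivery"
```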

On the internet, nobody knows you’re a chatbot, unless you’re in California. While the spread of false information through automated accounts and the misleading of people online are still rife, the California Act shows a positive way forward in helping the public navigate the issue. Chatbots have already demonstrated their ability to augment the human experience, especially in the business arena. So, while there exists the possibility for chatbots to be abused, there also exists the possibility to combat this and truly harness the benefits chatbots can bring.


Fairly’s mission is to provide quality assurance for automated decision-making systems. Their flagship product is an easy-to-use tool that researchers, startups, and enterprises can use to complement their existing AI solutions, regardless of whether those were developed in-house or with third-party systems. Learn more at fairly.ai.

