✍️ Column by Sun Gyoo Kang, Lawyer.
This column is dedicated to the memory of Abhishek Gupta, founder of the Montreal AI Ethics Institute, who recently passed away. Abhishek was a brilliant mind and a pioneer in AI ethics. His early encouragement of my work was invaluable. The tech world has lost a visionary, but his legacy of promoting ethical AI will continue to inspire us all.
Disclaimer: The views expressed in this article are solely my own and do not reflect my employer’s opinions, beliefs, or positions. Any opinions or information in this article are based on my experiences and perspectives. Readers are encouraged to form their own opinions and seek additional information as needed.
In the realm of Artificial Intelligence (‘AI’), chatbots have become a popular customer-service tool across industries, from financial institutions to transportation and telecommunications companies. A recent incident involving Air Canada’s chatbot highlights the importance of responsible AI development and implementation.
The Story of a Chatbot that Deceived a Customer
A customer approached Air Canada’s chatbot to book a flight in a bereavement situation (the customer’s grandmother had passed away). Although the chatbot recognised the customer’s circumstances and the availability of bereavement fares, it provided misleading instructions. Rather than guiding the customer through the correct process to access the discount, the chatbot advised purchasing a full-price ticket and then seeking a refund for the price difference.
Trusting the AI’s guidance, the customer followed these steps, incurring a higher expense than necessary. When the customer then requested the promised reimbursement, Air Canada refused, stating that such a refund was against its policy.
Feeling deceived by the chatbot’s misinformation, the customer pursued legal action against Air Canada. The case turned on the question of organisational accountability for AI-generated information. Air Canada sought to distance itself from the chatbot’s error, arguing that the chatbot operated independently as a distinct legal entity. However, the British Columbia Civil Resolution Tribunal ruled in favour of the customer, emphasizing that the chatbot was an integral part of Air Canada’s website and therefore represented the company. Consequently, Air Canada was held responsible for the information disseminated through its AI assistant.
One theme of this incident is clearly the accountability of an artificial intelligence system (‘AIS’); another is its robustness. The pivotal question we need to address is: what does accountability for AI entail?
According to Merriam-Webster, accountability means:
- The quality or state of being accountable
- Especially an obligation or willingness to accept responsibility or to account for one’s actions
The Cornerstone of Law: Assigning Accountability
Accountability stands as a fundamental pillar of our legal system. Simply put, it dictates that those whose actions cause harm to others can be held accountable for the consequences. But who precisely falls under the umbrella of “those” accountable? Let’s delve into the concept.
Adults at the helm: Mature individuals are held accountable because of their mental capacity: they have a body to act with and the ability to understand their actions and the potential ramifications. If Adult A intentionally slaps Adult B, we all agree that Adult A should be held accountable for the harm caused to Adult B. Accountability rests on conscious decision-making: adults are expected to foresee the outcomes of their actions. When an adult breaches the law or causes harm through negligence, they may face legal repercussions via criminal charges or civil lawsuits.
The blurring lines of age and understanding: Young children and beloved pets operate under different parameters. Their grasp of right and wrong may be underdeveloped, and the law recognises this distinction. Rather than blaming a curious child who breaks a vase or a playful pet that chews furniture, the law typically places responsibility on the adults overseeing them. Parents or guardians are generally held accountable for their children’s actions, while pet owners answer for the mischief (or damage) caused by their furry (or feathery) companions.
Companies: legal “Persons” with obligations: The concept of accountability extends beyond individuals. In most jurisdictions, companies are treated as legal “persons.” This signifies their distinct legal existence independent of their owners and employees. Similar to adult human beings, companies can own property, engage in contracts, and even accrue debts. This legal status also comes with inherent responsibilities. Companies are expected to operate within the legal framework and ensure their employees conduct themselves appropriately. If, due to an employee’s error, a customer is harmed, the company may be held legally accountable depending on the specific circumstances. This stems from the company’s status as a legal entity with a responsibility to ensure its operations are conducted in a manner that safeguards others from harm.
This offers a brief glimpse into the world of accountability. The core principle remains: when someone or something possesses both rights and obligations, and understands right from wrong, they can be held accountable for their actions. This web of accountability helps ensure a sense of fairness and justice within our society.
Are there any AIS that have legal personality today?
Now, let’s turn to the core question: can an AIS be held independently accountable? Before analysing that, let’s check whether any AIS already has legal personality.
- Sophia, a humanoid robot created by Hanson Robotics, made history in 2017 by becoming the first robot to receive citizenship. Granted citizenship by Saudi Arabia, Sophia can hold conversations and display realistic facial expressions. Although not truly sentient, Sophia’s citizenship sparked debates about the legal and ethical implications of AI rights.
- Mirai, a chatbot character residing on the Line messaging app, became the first AI to receive residency status in 2018. This symbolic move by Shibuya Ward in Tokyo highlighted the ward’s interest in AI and its desire to foster a tech-savvy image. Mirai, programmed to be a friendly child, can chat with users and edit selfies.
These are the only two examples to date, and we could hardly categorise either as Artificial General Intelligence (AGI). Both were symbolic gestures, and I would not treat them as genuine cases of legal personality being attributed to an AIS.
What is accountability for AI?
While advancements in AI have been nothing short of remarkable in recent years, the notion of independent rights and obligations for AI in its current state remains a topic of vigorous academic discourse. The crux of the argument lies in the current limitations of artificial consciousness. Contemporary AIS, despite their impressive capabilities, lack the sentience and moral agency necessary to be truly responsible actors.
These systems function based on vast datasets and preprogrammed algorithms, essentially sophisticated tools adept at pattern recognition and information processing. This proficiency does not translate to an understanding of the ramifications of their actions or the ability to make independent, ethically grounded choices. Assigning legal rights or solely holding them accountable seems incongruous at this stage in our technological development.
However, the future holds the intriguing possibility of AGI: hypothetical machines with human-level or even greater intelligence. If and when AGIs develop true consciousness and the ability to comprehend their actions within an ethical framework, the accountability landscape will transform dramatically.
Prominent AI philosopher Margaret Boden champions the development of a future framework for “machine morality.” She envisions a world where AGIs are programmed with core ethical principles and the capacity to learn and refine their moral reasoning. In such a scenario, AGIs could potentially be held accountable for their actions, similar to how humans are judged based on their grasp of right and wrong. This necessitates the establishment of sophisticated ethical frameworks and legal systems specifically designed to address the unique challenges posed by AGIs.
On the other hand, some researchers like Luciano Floridi, a preeminent philosopher specialising in digital ethics, propose a more nuanced perspective. He argues that directly holding AGIs accountable might be impractical. Floridi suggests a focus on robust safety measures and rigorous human oversight to ensure AGIs operate within ethical boundaries. Ultimately, the responsibility would fall on the developers and those who deploy such powerful AI. Floridi emphasises the need for clear lines of accountability, ensuring humans take ownership of the design, development, and implementation of AGIs.
Various Participants
The ramifications of this question extend far beyond theoretical exploration. Consider the case of self-driving cars. If an autonomous vehicle malfunctions and causes an accident, who is to blame? The manufacturer for creating a faulty system? The programmer who wrote a flawed line of code? Or is there a future where the car itself could be held accountable? These are just some of the questions that necessitate a thoughtful exploration of AI accountability.
If we come back to the case of a chatbot, who could be held accountable?
- The Chatbot Developer: The programmers who built the chatbot are the architects of its functionality. If the chatbot malfunctions or provides inaccurate information due to faulty programming, the developers could be liable.
- The Training Data Provider: Chatbots learn from the data they’re trained on. If the training data is biased, incomplete, or contains errors, the chatbot may deliver biased or misleading responses. The provider of this data, which may include data brokers, might share some responsibility.
- The Chatbot Platform: Many chatbots operate on platforms offered by cloud service providers. These platforms have security measures in place, and if a security breach exposes user data or allows the chatbot to be compromised, the platform provider could be accountable.
- Vector Database Providers: In retrieval-augmented generation (RAG) chatbots, the vector database that stores the reference material becomes a new accountability concern. Inaccurate or biased data within the database can lead the chatbot to deliver faulty responses, and retrieval errors, where the chatbot pulls irrelevant information, can cause problems of their own (a minimal sketch of this retrieval step appears after this list). This adds another layer to the question of who is responsible when a RAG chatbot goes wrong.
- The Company Deploying the Chatbot: The company that utilizes the chatbot is ultimately responsible for how it interacts with customers. They have a duty to ensure the chatbot is properly configured, monitored, and provides accurate information. Failure to do so could lead to liability.
- The End User: While primarily on the receiving end, users might also have some responsibility. If a user misinterprets clear instructions or fails to follow prompts provided by the chatbot, it might be difficult to place complete blame on the chatbot or its creators.
- The Chatbot itself: In the future, a chatbot might possess sufficient sentience and moral capacity to make decisions the way humans do. If chatbots could be held legally accountable, then, conversely, they should also be able to amass wealth and even carry liability insurance, like a corporation!
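To make the vector-database concern above concrete, here is a minimal, self-contained sketch of the retrieval step in a RAG chatbot. Everything in it is a hypothetical stand-in: the policy snippets, the toy bag-of-words “embedding”, the in-memory index playing the role of a vector database, and the abstention threshold. A real deployment would use a trained embedding model and a managed vector store, but the sketch shows where retrieval quality, and with it accountability, enters the pipeline.

```python
# Minimal sketch of the retrieval step in a RAG chatbot (not a production
# system). Everything here is a hypothetical stand-in: the policy snippets,
# the toy bag-of-words "embedding", the in-memory list playing the role of
# a vector database, and the abstention threshold.

from math import sqrt

# Hypothetical policy snippets an airline chatbot might index.
DOCUMENTS = [
    "Bereavement fares must be requested before the ticket is purchased.",
    "Refunds for unused tickets are processed within 30 business days.",
    "Checked baggage allowance is two bags of up to 23 kg each.",
]

def embed(text: str) -> dict[str, float]:
    """Toy term-frequency 'embedding'. A real system would use a trained
    dense embedding model instead."""
    vec: dict[str, float] = {}
    for token in text.lower().split():
        token = token.strip(".,!?")
        vec[token] = vec.get(token, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(weight * b.get(term, 0.0) for term, weight in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "Indexing": in a real deployment this would live in a managed vector store.
INDEX = [(doc, embed(doc)) for doc in DOCUMENTS]

def retrieve(query: str) -> tuple[str, float]:
    """Return the best-matching snippet and its similarity score."""
    query_vec = embed(query)
    return max(((doc, cosine(query_vec, vec)) for doc, vec in INDEX),
               key=lambda pair: pair[1])

snippet, score = retrieve("How do I get a bereavement fare refund?")

# The accountability-relevant design choice: when the best match is weak,
# abstain and escalate instead of letting the model improvise an answer.
THRESHOLD = 0.1  # arbitrary illustration, not a recommended value
if score < THRESHOLD:
    print("No reliable policy found; escalating to a human agent.")
else:
    print(f"Grounding the answer in: {snippet!r} (score {score:.2f})")
```

The final check is the design point: when retrieval is weak, a responsibly configured chatbot should escalate to a human rather than improvise, which is precisely the failure mode at issue in the Air Canada story.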
Conclusion: The AI Accountability Maze
From the rise of the personal computer to the ubiquitous smartphone, innovation has transformed how we live and work. But with each advancement comes a new set of challenges, and the recent case with Air Canada’s chatbot serves as a stark reminder: As AI chatbots become commonplace, the question of accountability becomes paramount.
The Air Canada case highlights the murky legal waters surrounding AI accountability. Was it truly the chatbot’s fault for providing inaccurate bereavement fare information, or should the airline shoulder the blame for deploying an imperfect system? The tribunal’s decision, holding Air Canada responsible, throws down a gauntlet: companies cannot simply absolve themselves of liability when their AI creations mislead or misinform customers.
This has far-reaching implications. Imagine a future where AI chatbots become even more sophisticated. What if a healthcare chatbot dispenses incorrect medical advice? Or a financial chatbot steers a client toward a disastrous investment decision? Will the companies behind these AI tools be held accountable for the consequences?
And which company should be accountable? The developer of the foundation model? The provider of the vector database service? The open-source organisation that publishes a flawed model library? The integrator that assembles these components and sells the service to a business? Or the company facing the end customers?
The conversation doesn’t stop there. As AI continues to evolve, could we one day see the emergence of AGI? This hypothetical super-intelligence, capable of independent thought and action, would push the boundaries of responsibility even further. Would an AGI be considered an employee, with the company legally liable for its actions (e.g., through vicarious liability)? Or could it act independently, like a consultant deployed by a big tech firm monopolizing the market?
These are complex questions with no easy answers. But the Air Canada case serves as a wake-up call. As we embrace the potential of AI, we must also address the potential pitfalls. Robust governance frameworks, clear lines of accountability, and ongoing risk assessments are crucial. The future of AI is bright, but let’s chart our course responsibly, ensuring these powerful tools work for us, not the other way around.